What to do with too many images


By EDWARD MCCAIN, Reynolds Journalism Institute

Edward McCain, Digital Curator of Journalism, leads the Journalism Digital News Archive (JDNA), which addresses issues of access and preservation of digital news collections.

RJI digital curator Edward McCain begins quest to rank and sort photo files through AI

Sometimes it seems like the digital world is turning us all into virtual hoarders. As storage costs go down, it becomes easier to pile our binary creations into a digital attic, never looking back. In the case of news organizations, we are constantly saving text, photos, audio, video and datasets — some published, others saved for possible use later.

Do we really need all of those images and all that footage? It’s like a Google search that returns millions of results: Rare is the day a person looks past the first few pages.

We want our digital space to look less like the building you can’t walk through on the TV series “Hoarders” and more like the “full but cozy” home with a friendly fire in the corner.

Getting rid of the bad stuff isn’t easy. Going through 1,000 or 10,000 photo “outtakes” requires people and time. Can we create a system that automatically disposes of lower-quality images while keeping enough that an editor has flexibility in future publishing?

With artificial intelligence, or AI, our goal is not so much to replace humans as to give them a digital assist on some of the grunt work so they can focus on higher-level tasks. By giving editors more time to think, plan and create — things that people still do better than computers — news organizations can improve the quantity and quality of their content.

Reducing the sheer number of images allows editors to narrow the set of possibilities more quickly and to find just the right photograph more easily. As they say, time is money. If AI can save staff time, news organizations will save money.

The human baseline

At the Columbia Missourian, the community newspaper run by faculty and staffed by students at the University of Missouri School of Journalism, we’ve experienced these issues as the available space gets smaller and smaller on our existing RAID (Redundant Array of Independent Disks, a.k.a. a bunch of hard drives that work together to protect against loss).

In 2018, the Columbia Missourian photo department generated about 5 terabytes (TB) of visual content. A terabyte provides enough space to hold roughly 300,000 photos or 1,000 days of video. Since the late 1990s, the Missourian’s digital photo/video backup has grown to nearly 20 TB.

One approach would be to have photo editors separate the keepers from the photos we don’t need.

To test that approach, Missourian Director of Photography Brian Kratzer assigned graduate students with editing experience to winnow the best images from two football games.

Football games require lots of photos; you never know when a critical play will happen.

What’s more, during a game, editors are working against tight deadlines and don’t have time to think about which photos might be useful down the line. As a result, potentially valuable shots simply get stored alongside many more unusable ones.

The first round of this image-editing test involved 1,000 photos, of which 600 were flagged for removal over a 40-minute period — a processing rate of 2.4 seconds per image. The second test produced a set of 800 “keeper” images winnowed from a pool of 2,800, an overall processing rate of 1.9 seconds per frame.

That’s an average editing rate of about 2.2 seconds per image. For the 5 TB of photos captured last year, around 1.5 million frames would need to be evaluated, which would take approximately 917 hours.

At Missouri’s minimum wage of $8.60 an hour, that would add up to $7,886.20, not the kind of money that the Missourian or many other small newspapers have jangling about in their couch cushions.

Keep in mind, too, that safe computing requires keeping at least three backup copies of everything. So multiply that 5 TB of storage by three. Sure, hard drives keep getting cheaper. But they aren’t free.
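For anyone who wants to check that math, here it is as a short Python sketch; every figure comes straight from the tests and prices above.

    # Back-of-the-envelope check of the editing estimates above
    SECONDS_PER_IMAGE = 2.2   # rounded average of the 2.4 and 1.9 s/image test rates
    PHOTOS_PER_TB = 300_000   # rough photo capacity of one terabyte
    ARCHIVE_TB = 5            # visual content generated in 2018
    MIN_WAGE = 8.60           # Missouri minimum wage, dollars per hour
    BACKUP_COPIES = 3         # safe computing: at least three copies

    frames = ARCHIVE_TB * PHOTOS_PER_TB               # 1,500,000 frames to evaluate
    hours = round(frames * SECONDS_PER_IMAGE / 3600)  # about 917 hours of editing
    cost = hours * MIN_WAGE                           # about $7,886.20
    storage_tb = ARCHIVE_TB * BACKUP_COPIES           # 15 TB for one year of photos

    print(f"{frames:,} frames, ~{hours} hours, ~${cost:,.2f}, {storage_tb} TB to store")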

Bring on the coders

So we’re turning to Professor Sean Goggins from the University of Missouri College of Engineering and his software engineering class for help.

We invited these up-and-coming computer coders to experiment with open-source (a.k.a. “free as in free kittens”) computer algorithms to evaluate digital photos.

We have copied about 7,000 JPEG images from two MU football games onto a secure server that will allow the class to experiment with the images we need to rate.

We included information about the kinds of metadata (additional information attached to the photos) that might give the software clues about the quality of the images. For example, photo editors have tagged some images with a “color class” or “star rating,” which is embedded in the image file itself. There may be other clues in the files that the programs can use when assigning a rating.
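As a rough illustration, the Python sketch below scans a JPEG for an embedded XMP metadata packet and pulls out a star rating and color class. It assumes those values were written as xmp:Rating and Photo Mechanic-style photomechanic:ColorClass attributes; real files vary by editing tool, so treat this as a starting point rather than a specification.

    import re

    def read_ratings(path):
        # Look for an XMP packet inside the image file and pull out any
        # editor-applied ratings. The attribute names are assumptions
        # based on common tools (Adobe XMP, Photo Mechanic).
        with open(path, "rb") as f:
            data = f.read()
        start = data.find(b"<x:xmpmeta")
        end = data.find(b"</x:xmpmeta>")
        if start == -1 or end == -1:
            return None  # no XMP metadata found
        xmp = data[start:end].decode("utf-8", errors="ignore")
        stars = re.search(r'xmp:Rating="(\d+)"', xmp)
        color = re.search(r'photomechanic:ColorClass="(\d+)"', xmp)
        return {
            "stars": int(stars.group(1)) if stars else None,
            "color_class": int(color.group(1)) if color else None,
        }

    print(read_ratings("football_0001.jpg"))  # hypothetical filename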

In the next few weeks, the class will test its code against the judgments of human editors. There may be a need for humans to train the algorithms: each time a photo editor agrees or disagrees with the software, the software takes note – thus the moniker “machine learning.”

The machines might start out slow and need a lot of training and a ton of data over time, but they never forget their lessons.
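None of this code exists yet, but the feedback loop described above might look something like the sketch below, which updates a simple scikit-learn classifier one editor decision at a time. The extract_features() helper and the keep/toss labels are placeholders, not the class’s actual design.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Minimal sketch of editor-in-the-loop training; hypothetical
    # helper names, not the class's actual code.
    model = SGDClassifier(loss="log_loss")  # supports incremental updates
    CLASSES = np.array([0, 1])              # 0 = toss, 1 = keep

    def extract_features(photo_path):
        # Placeholder: real features might include sharpness scores,
        # histogram statistics and the embedded star/color ratings.
        return np.random.rand(16)

    def record_editor_feedback(photo_path, editor_says_keep):
        # Each agree/disagree becomes one more labeled example --
        # the "learning" in machine learning.
        x = extract_features(photo_path).reshape(1, -1)
        y = np.array([1 if editor_says_keep else 0])
        model.partial_fit(x, y, classes=CLASSES)

    def predict_keep(photo_path):
        # Once trained, the model suggests keep/toss for new frames.
        x = extract_features(photo_path).reshape(1, -1)
        return bool(model.predict(x)[0])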

Will it work? I’ll keep you updated.
