The Dr. Jekyll and Mr. Hyde of Artificial Intelligence for Newspapers

Editorial ethics is on the line

The question of ethics in editorial content isn’t new… just some of the tools available are. When it comes to generative AI, it’s important to decide now what your publication will allow.

Julia Viers
Russell Viers
October 19, 2023
13 minutes with your human brain

I still remember the first time I showed Photoshop’s Clone Stamp Tool to a newspaper audience. It was 1997 and it was called the Rubber Stamp Tool back then. The room was silent in amazement as I eliminated various marks, spots, and objects from the image. I think I removed a hot air balloon from the sky in a photo, as well.

Many people have come up to me over the years to show how they’ve used this tool to “fix” photos, and to share stories of how others abused the power of that simple technique, only to lose their jobs.

Back then, the line in the sand for news photos, anyway, was that you could clean up a photo, removing scratches, spots, and the like, but no manipulation. None. Stories of people adding basketballs to photos to make the shots more exciting, or removing the oxygen hose from a town mayor’s face, come to mind, along with too many others. People lost jobs over this type of photo manipulation. I’ll bet you have stories of your own.

And now we have generative artificial intelligence (AI) built into Photoshop, which gives any user simple-to-use tools that can manipulate a photo far beyond anything we dreamed of doing with the Clone Stamp Tool. Want to remove things? Click. Want to add things? Type and click. BOOM… you have the photo you meant to take.

Yeah, it’s pretty amazing, and scary, at the same time.

And Photoshop’s new Generative Crop Tool? I shake my head thinking about the implications. This new tool allows the user to crop beyond the edges of the photo and, with a simple click of a button, the blank area around the image is filled with what Photoshop thinks should be there… in seconds. I’ve used it in testing. I’ve taken square photos and added width to make them landscape images. Obviously, what’s added doesn’t really exist. Adobe’s generative AI brain, Firefly, is creating what it thinks would look natural and believable.

AI vs. Generative AI

Notice I’ve been calling it “generative AI”? It matters, as this is the Mr. Hyde to the Dr. Jekyll of AI we’ve had in Photoshop, and many other applications, for years now.

Mild-mannered Dr. Jekyll’s AI, in our graphic arts world, is rather harmless: it does a particular task we could do ourselves, but with a trained mind behind it that helps us identify and fix things more quickly. For example, all of Adobe’s photo adjustment tools have new masking capabilities that will quickly identify the subject of a photo, or the sky, or even people. In the case of people, it not only identifies them, it can distinguish the various people in the photo, then let you select just faces, or eyes, or lips, or facial hair, and more.

I can do this exact same thing with my Lasso Tool, or maybe Magic Wand Tool, or even the Quick Select Tool, or many other ways, but it would take a lot longer…and a steady hand. 

This technology makes a selection based on what it has been taught, so it can recognize what a sky looks like, or people, and so on. But it’s not changing anything. It’s merely selecting, so we can then make adjustments to lighting, color values, and the like… things we’ve done for decades, just without Dr. Jekyll’s ability to select things for us.

There are many other AI features I would lump in this category: things like JPEG Artifact Removal, Photo Restoration, Depth Blur, Super Zoom, and others. These are all under Filter > Neural Filters, which we’ve had for several versions of Creative Cloud. They are run by Adobe’s Sensei AI brain, which, unlike the Mr. Hyde of Firefly, focuses on functionality.

The line between AI and generative AI gets a little blurry at times, however, as there are tools in the Neural Filters panel that are, in my mind, generative, in that they literally change the picture, manipulating it to tell a different story. Smart Portrait is a perfect example of how a user is just a few clicks away from changing the photo completely from what was shot with the camera.

To Use, or Not to Use Generative AI

Now, I’m not in a position to tell you whether to use these tools, or where to draw your line in the sand as to what’s allowed and what’s not. I can tell you that for my photos, and my art, I don’t let Mr. Hyde anywhere near my work. I will not use generative AI to change or manipulate my photos in any way. On the other hand, I rely on Dr. Jekyll’s AI tools to help me make selections so I can make the same lighting and color adjustments I’ve been making for years, only faster.

Keep in mind that Adobe’s not the only player in this game. In fact, they are a little late to it, compared to Midjourney and other tools that can generate art just by typing a few keywords into a prompt. There are generative AI tools for music and video creation, and much more. It’s really hard to keep track of, because, as I write this article, things are already changing.

And it’s not just visuals. Mr. Hyde now has tools for writing text, in the form of ChatGPT and many others. ChatGPT is only a year old, and it has already had an incredible impact on how the world creates content.

I probably could have let ChatGPT write this article for me, but for many reasons, I didn’t. One is that I want MY thoughts and words in this story. Another is that, although it takes me a lot longer, I enjoy the journey of finding the right words, constructing the sentences, and creating a flow to convey my thoughts as clearly as I can. And another is that I’m convinced people are more willing to read words written by a human, even if they are more flawed than what a machine could right … wait … scratch that … write, because what is written by the human is truly original thought, not just notes pulled from a database.

There is a movement in the art community promoting “No AI,” with logos people can put on their art to let viewers know that what they are seeing is 100 percent man-made. And as much as I would LOVE to promote that on my site and in my work, I come back to the point of this article: there are two sides to this AI issue, and they aren’t both “bad.”

Dr. Jekyll has so many non-generative tools that help me work faster without creating content that’s not real. Whether it’s the many Photoshop tools mentioned above or something as simple as AI-driven spelling and grammar checking in various applications, there are great AI tools that are harmless, in my opinion. I placed some photos in a PowerPoint presentation the other day and the application automatically wrote the alt text for sight-impaired readers, using AI… and it was accurate (I would, of course, proof it all before releasing it to the public).

Mr. Hyde Can Be Dangerous

It’s Mr. Hyde who’s the problem, in my mind. Changing imagery and writing stories automatically can be dangerous, I think. And as more papers add video to their websites, let’s not forget the equal dangers of manipulating that, too. There are tools available now that can automatically eliminate the “umms” and other empty words from video. Is that simply editing, or is it changing the story? What about having the same power to remove or add things in a video that we have in Photoshop?

So what is your line in the sand going to be? What are you going to allow AI / Generative AI to do, and not do? I think it’s a decision we all need to make… sooner rather than later.

I’ve drawn my line. It’s the same line I’ve used since I first started using Photoshop thirty years ago. I won’t do anything to a photo I couldn’t do in the darkroom. Granted, I didn’t have the amazing masking tools back then, but even with Dr. Jekyll’s selection tools, I’m only going to adjust light and color, etc., much like I did with burning and dodging in the old days. As for images for advertising, that’s a totally different discussion we can have another day.

It’s Time to Draw Our AI Line in the Sand

I recommend newspapers establish their AI / Generative AI rules as soon as possible. Perhaps talk with other newspapers, and your state associations, to get their takes on this to help you decide. Feel free to drop me a line, if you want. 

And be specific, not general. “Do not use ChatGPT to write your story,” “Use of ChatGPT is allowed to make a story you’ve written shorter, provided you proof the final version again before submitting,” and “Under no circumstances are you allowed to use Generative Cropping on a news photo” are examples. This is not a time for ambiguity.

Then, and this is so important: make the rules very clear to everyone involved with the creation of your newspaper, including stringers, freelancers, and part-time employees. Put your rules in the employee handbook. Make a poster and put it over every computer so there is no question about what is allowed and what is not. With the high turnover in news and production departments these days, this makes even the newcomers aware of the rules on day one.

And finally, evaluate your rules regularly. The AI world is changing rapidly, and not just in the number of players entering the field with various tools: existing products are improving every day thanks to new programming, the information fed into their data pools, and the fact that AI is actually getting smarter. So keep an eye out for what’s new and what’s changing, and for how those changes affect the way you put out a newspaper.

If necessary, change and adapt your rules in keeping with current technology. Then update all of your employees and freelancers, change the employee handbook, and change the posters hanging around the office.

I wonder: if ChatGPT had written this article, would it simply have read, “Make your rules about AI, post your rules, evaluate often, adapt your rules if necessary, repeat”?

Author

Russell Viers

Third Chair Trumpet

I'm just a guy who was lucky to have made MANY mistakes creating files since 1987...and learning from those mistakes. Always trying to find a better way, I've learned the techniques you see in these videos on real projects over 35 years (plus many more doing paste-up).
