AI and Automation Archives - PR Daily https://www.prdaily.com/category/ai-and-automation/ PR Daily - News for PR professionals Mon, 25 Mar 2024 19:34:14 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.3 PR pros must prepare for the rise of AI journalism https://www.prdaily.com/pr-pros-must-prepare-for-the-rise-of-ai-journalism/ https://www.prdaily.com/pr-pros-must-prepare-for-the-rise-of-ai-journalism/#respond Wed, 27 Mar 2024 11:00:51 +0000 https://www.prdaily.com/?p=342500 It’s going to create serious challenges for PR pros. Sean O’Leary is vice president at Susan Davis International. Everything about the site looked legitimate. The reporter had a headshot. The article properly shared the news. But the use of one word gave away the fact it was all generated by artificial intelligence. Earlier this year, […]

The post PR pros must prepare for the rise of AI journalism appeared first on PR Daily.

It’s going to create serious challenges for PR pros.

Sean O’Leary is vice president at Susan Davis International.

Everything about the site looked legitimate.

The reporter had a headshot. The article properly shared the news. But the use of one word gave away the fact it was all generated by artificial intelligence.

Earlier this year, our agency sent out a press release for a client about three new leaders joining the company. As we reviewed the news clips, a new site popped up in our results. We hadn’t heard about the site and were initially excited.

Then we read the lede. The company had not hired a “trio” of new leaders – it had hired a “trinity” of new leaders. There’s not a human reporter alive who would ever refer to three new business leaders like that.

Indeed, it wasn’t a human reporter. Everything about the article was AI-generated, including the “headshot” of the “person” who “wrote” the story.


The phenomenon of AI-generated news is not new, as evidenced by the Sports Illustrated scandal late last year, when the once-revered outlet was reduced to publishing AI-generated articles and attempting to sneak them past an unsuspecting public.

Most would agree that journalistic best practices would indicate a proper news outlet should make the reader aware if AI was responsible for the article they’re reading. But what if the entire outlet is AI-generated?

As PR professionals, we almost always want to expand the media footprint for our clients, and more sources of coverage are good. There was nothing wrong with this particular article, other than the bizarre use of the word “trinity.” It showed up in Google News. It showed up in our media monitoring. There was nothing negative.

On one hand, I should be happy as a PR professional. We got an extra article for a client that was delivered to people around the world. Most of the general public doesn’t know when they’re reading an AI-generated article.

On the other hand, there’s a helpless feeling. An AI-generated news story can be good, but what if it’s bad? What if it starts spreading incorrect or unfavorable coverage to the masses?

As we enter the AI age of news media, here are a few tips for PR professionals.

Educate your clients on the AI media landscape

Even the savviest communication leader can be fooled by a strong AI-generated article. The first step in approaching AI-generated news is to educate everyone involved about what’s going on. Although they may be aware of AI news articles, they may not have experienced one personally.

For most AI-generated news, there is no action item beyond education. An article in these publications does not register on the same level as a legitimate, established outlet, but the average person reading these articles may not know that. As long as the news is correct, it’s simply bonus coverage.

Review every AI-generated article

However, just because one AI-generated article was good does not mean they all will be. While it’s always best practice to review articles to ensure your client’s news is presented factually and correctly, it’s even more critical with AI articles.

One such instance happened last fall, when an AI-generated news article popped up about a client’s annual sustainability report. Unfortunately, the article covered the 2022 report as if it had been released in 2023.

This was not an easy correction, as AI reporters are notoriously hard to track down. Instead, our team had to reach out to multiple salespeople at the site until finally reaching a human being who could remove the article completely. Ultimately, we were successful, and the false article had little to no impact – but it was a warning sign.

Stay current with AI trends

By the time you read this article, there might be a new AI trend emerging in journalism. We’re only starting to scratch the surface of generative AI, with altered photos impacting Presidential campaigns and the most famous pop star on Earth.

There will be more AI-generated news sites, more AI-generated news articles, and more AI-generated news reporters. That much, I know. The rest? I’m not sure.

AI has the potential to completely upend and disrupt the news media. For public relations, that means our industry could be upended and disrupted too.

We can’t predict the future of AI. We can be prepared.

3 ways AI assists internal communications https://www.prdaily.com/3-ways-ai-assists-internal-communications/ https://www.prdaily.com/3-ways-ai-assists-internal-communications/#respond Tue, 19 Mar 2024 08:00:15 +0000 https://www.prdaily.com/?p=342335 Empowering, not replacing, corporate communicators with AI. Artificial intelligence (AI) is hyped to become a transformative force across all industries. According to Next Move Strategy Consulting, the global AI market was valued at $95.6 billion in 2021 and is predicted to grow with a 32.9% compound annual growth rate to reach $1.85 trillion by 2030. […]

The post 3 ways AI assists internal communications appeared first on PR Daily.

Empowering, not replacing, corporate communicators with AI.

Artificial intelligence (AI) is hyped to become a transformative force across all industries. According to Next Move Strategy Consulting, the global AI market was valued at $95.6 billion in 2021 and is predicted to grow with a 32.9% compound annual growth rate to reach $1.85 trillion by 2030.

As the use of AI expands, it has the potential to revolutionize HR and corporate communications by linking data with content, but we must use it responsibly. According to Top Trends in Privacy Driving Your Business Through 2024, a report by Gartner®, “By 2025, regulations will necessitate focus on AI ethics, transparency and privacy, which will stimulate — instead of stifling — trust, growth and better functioning of AI around the world.” Let’s explore what this might look like.

First, recognize AI for what it is: Artificial

While AI output is fascinating at this stage, remember that it’s only as good as its inputs. AI rehashes and rewrites existing content, just a bit more cleverly than traditional plagiarism. People already use AI to write news stories, so concerns about job displacement are not unfounded. However, it’s essential to recognize that AI’s contribution to corporate communications is much more nuanced than merely replacing human writers. Yes, AI can quickly generate text, yet the output is limited by the quality and integrity of the sources it has processed. Rather than replacing human writers, AI is more likely to become a time-saving assistant, allowing communicators to gain insights from data and focus more on strategy and creativity.

How will AI assist employee communications?

  1. Use communications analytics data to inform content strategy. One of the significant challenges in corporate communications is understanding message uptake. Communications analytics data like PoliteMail’s Benchmark Report reveals that employees are willing to spend about a minute with an average email, with the highest engagement observed in messages that take thirty seconds or less to read. It won’t be long before AI makes this type of data analysis available as real-time recommendations, with variable tuning based on the message content and intended audience. Internal comms and HR teams may leverage AI tools as an editor to quickly condense lengthy content into more concise, reader-friendly message summaries. For example, internal comms could ask an AI tool to take a Teams meeting transcript and produce a bullet list summary for broadcast distribution.
  2. Optimize communications for higher engagement. AI excels at pattern matching and machine learning. So, when teams apply these tools to content analysis and communications metrics, they can enhance the value of both. Effective communicators possess strong intuition and language skills, and adding data-driven insights to evaluate the impact of their work will expand their reach and improve desired outcomes. For example, PoliteMail provides an AI-driven subject line suggester trained on attention rate data. Based on past performance, the tool suggests subjects likely to garner more attention. The communicator provides the content and ideas — what are we communicating and why — and AI helps optimize the how and the word choice.
  3. Maintain a consistent brand voice. Beyond visual brand guidelines that define a company’s logo, font, and colors, corporate communications teams seek to maintain a consistent brand voice (the company’s style, attitude and tone). With its ability to learn patterns, AI can help a diverse team of writers execute a more consistent brand voice by mimicking a specific fashion, point of view and character. By training AI to edit content to align with an organization’s defined brand voice, communicators can ensure a cohesive identity. An organization could train an AI on its brand voice by inputting its current collateral library that fits the brand voice. Some have seen tools like ChatGPT accomplish this when prompted to rewrite a speech in the style of Teddy Roosevelt or write a story in the style of Mark Twain.
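As a minimal illustration of the read-time math behind the Benchmark Report figures in the first tip, here is a hedged sketch. The function names, the 200-words-per-minute reading speed, and the one-minute threshold are assumptions for the sketch, not PoliteMail’s actual model:

```python
# Hypothetical sketch: flag internal emails likely to exceed the roughly
# one-minute attention window cited from PoliteMail's Benchmark Report.
# Assumes an average reading speed of ~200 words per minute.

WORDS_PER_MINUTE = 200
ATTENTION_BUDGET_SECONDS = 60

def estimated_read_seconds(text: str) -> float:
    """Estimate how long an email takes to read, in seconds."""
    word_count = len(text.split())
    return word_count / WORDS_PER_MINUTE * 60

def needs_condensing(text: str) -> bool:
    """True if the message likely exceeds the one-minute attention budget."""
    return estimated_read_seconds(text) > ATTENTION_BUDGET_SECONDS

short_update = "All-hands moved to 3 p.m. Friday. Same link as last week."
print(needs_condensing(short_update))  # → False: a short note fits the budget
```

A comms team could run drafts through a check like this before deciding which messages to hand to an AI summarizer for condensing.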

Say Hi to AI

While AI is a powerful up-and-coming tool, companies should view it as a collaborative partner rather than a replacement for human intelligence. Leveraged responsibly, AI can help streamline content production and provide valuable data-driven insights that help comms teams produce more engaging content. Used strategically, AI can elevate corporate comms by strengthening content strategy, optimizing communications for reach, readership and engagement, and defining and maintaining a robust and consistent brand voice.

Generative AI is making us hanker for human interaction https://www.prdaily.com/generative-ai-is-making-us-hanker-for-human-interaction/ https://www.prdaily.com/generative-ai-is-making-us-hanker-for-human-interaction/#respond Mon, 18 Mar 2024 15:08:40 +0000 https://www.prdaily.com/?p=342382 Ragan and PR Daily’s CEO reflects on lessons learned from SXSW. Despite its name, South by Southwest is not easy to navigate. But getting lost in the thousands of sessions, meetups, exhibits and concerts in Austin, Texas every March is much of its appeal. As I explored this year’s festival, I found myself at the […]

The post Generative AI is making us hanker for human interaction appeared first on PR Daily.

Ragan and PR Daily’s CEO reflects on lessons learned from SXSW.

Despite its name, South by Southwest is not easy to navigate. But getting lost in the thousands of sessions, meetups, exhibits and concerts in Austin, Texas every March is much of its appeal. As I explored this year’s festival, I found myself at the intersection of Contradiction and Promise.

Within the first few hours, I attended keynotes and panel discussions stuffed with paradoxes: AI is good; AI is bad; opportunity awaits you; the end is coming. If you’re a lifelong learner with an open mind, this type of discourse draws you in like bees to honey.

One session focused on interpersonal communication, social atrophy and the need for humans to be more civil. That’s a lot to take in, but workplace expert Amy Gallo reminded us of the multiplier effect that one good deed produces. Considering the political discourse in the U.S. this election year, her tips on how to work with difficult people seemed reasonable and achievable for attendees. (During tough conversations, she advised, “Always grant someone their premise.”)

Not too far down the hallway was a keynote about “Billion Dollar Teams” fueled by generative AI. Ian Beacraft, founder and chief futurist of Signal and Cipher, spoke optimistically about the pervasiveness of AI and a future where one person can run a billion-dollar company, thanks to AI. In a nod to Publicis Chief Growth Officer Rishad Tobaccowala, he reiterated that “The future does not fit in the containers of the past.”

People who need people

Beacraft shared future-of-work scenarios, such as the manager-employee meeting in which the manager is in AI form. In this possibly far-fetched scenario, your boss won’t need to show up for your check-in because their AI version will suffice. This technology may be coming to an office near you. How this impacts manager communications is something we might want to bake into the 2025 strategic comms plan.

Bleeding-edge technology like generative AI means fewer paper cuts and more time for strategic and satisfying work. The average employee spends 32 days a year searching for documents or information, said Beacraft. With AI, that time will be whittled down to hours. What will they be doing with that extra time, assuming they still have a job? Beacraft’s assertion was they’ll forge better social connections, and teams will be more efficient. “The small team is the ultimate flex,” he said.


There is undoubtedly a dark side to AI, just as there is with other technologies. Whether you’re a communicator, a teacher, a doctor or a lawyer, future teams will be built with AI and people in mind. The good news, promised Beacraft, is that people will have more time for other people.

Lastly, I stumbled upon a standing-room-only session led by Noah Kagan, author of the new book “Million Dollar Weekend,” and founder of the wildly successful, entrepreneur-focused software marketplace AppSumo. Kagan extolled the virtues of hard work and grit and the power of that first dollar earned.

The room was full of what Kagan calls “wantrepreneurs” whose business ideas ranged from custom jewelry to a local hiking app. Promise permeated the room. There was no talk of AI, as Kagan focused on time-tested advice such as “Just Ask.” Successful people seek help from the people around them — family, colleagues and friends.

As we integrate AI into our work lives, we’ll be doing this together, not alone. Just ask for help. We’ll be leaning into one another for insights and ways to make the workplace, our communities and the world more human.

These three SXSW sessions underscore the paradoxical new world we’re stepping into: We want to understand AI, to embrace not fear it. We want social connection — we know we need that to be whole. And we shouldn’t stop dreaming, even if we can’t stand up a million-dollar business in a weekend.

Diane Schwartz is CEO of Ragan Communications. 

How to use custom GPTs in your public relations practice https://www.prdaily.com/how-to-use-custom-gpts-in-your-public-relations-practice/ https://www.prdaily.com/how-to-use-custom-gpts-in-your-public-relations-practice/#respond Mon, 18 Mar 2024 10:00:19 +0000 https://www.prdaily.com/?p=342371 Go beyond the off-the-shelf solution and find even more value in generative AI. Maddie Knapp is a senior media relations strategist at Intero Digital Content & PR Division, formerly Influence & Co.  AI in public relations isn’t just changing the game — it’s completely rewriting the rules. In 2024, AI’s imprint on PR will be profound, offering tools […]

The post How to use custom GPTs in your public relations practice appeared first on PR Daily.

Go beyond the off-the-shelf solution and find even more value in generative AI.

Maddie Knapp is a senior media relations strategist at Intero Digital Content & PR Division, formerly Influence & Co. 

AI in public relations isn’t just changing the game — it’s completely rewriting the rules. In 2024, AI’s imprint on PR will be profound, offering tools and techniques that are reshaping how we pitch topics, monitor media, and build relationships. A 2023 Muck Rack survey found that 61% of PR pros were already using AI or planned to explore it.

Now, understanding how to use AI in public relations isn’t about technology taking the wheel; it’s about us driving smarter, with AI as our supercharged GPS.

Custom GPTs: Tailoring pitches with AI precision

Custom GPTs in PR are like having a secret weapon in your arsenal. These AI-driven tools are designed to create content that fits your exact needs, illustrating how to use AI in PR. Let’s break down their impact across four key pitch types:

  1. Expert pitches: Think of custom GPTs as your expert whisperer, crafting pitches that sound like they’ve come straight from the horse’s mouth. AI can analyze the latest trends, reports and expert articles, ensuring your pitch reflects current industry insights. It can even mimic the tone and style of industry leaders.

Imagine you’re preparing a pitch about sustainable energy. While you might not be the top expert in this field, you can still create an impactful and authoritative pitch. Start by gathering a diverse range of recent articles from reputable publications on sustainable energy. This collection should include various formats like opinion pieces, news reports, and interviews. Use ChatGPT to analyze these articles, focusing on their language, style, recurring themes, keywords and overall tone. The goal here is to understand how sustainable energy topics are typically presented and discussed in your target industry.

Then, train a custom GPT model using these insights. This model will learn to replicate the writing style, thematic focus and tone observed in your research. By doing so, your custom GPT can generate pitch drafts that resonate with the style and substance of existing industry content. Your pitches will be more aligned with industry standards and targeted to your audience. This strategy enhances your efficiency, allowing you to focus on refining and personalizing your pitch rather than starting from scratch.
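A rough sketch of the corpus-analysis step described above, using simple term frequency as a stand-in for the tone-and-theme analysis. The sample snippets, stopword list, and function name are invented for illustration; a real workflow would feed full article texts to ChatGPT or a custom GPT:

```python
from collections import Counter

# Hypothetical mini-corpus standing in for the sustainable-energy articles
# you would actually gather; real inputs would be full article texts.
articles = [
    "Solar capacity grew again as grid storage costs fell.",
    "Grid operators say storage is the key to solar growth.",
    "Falling storage costs are reshaping the solar grid.",
]

# Minimal stopword list for the sketch; a real pipeline would use a fuller one.
STOPWORDS = {"as", "is", "the", "to", "are", "say", "again"}

def recurring_keywords(texts, top_n=3):
    """Count words that recur across a corpus, ignoring stopwords."""
    words = Counter()
    for text in texts:
        for token in text.lower().replace(".", "").replace(",", "").split():
            if token not in STOPWORDS:
                words[token] += 1
    return [word for word, _ in words.most_common(top_n)]

print(recurring_keywords(articles))  # → ['solar', 'grid', 'storage']
```

Surfacing the recurring keywords this way gives you a checklist of themes your custom GPT’s drafts should hit before you refine the pitch by hand.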


  2. Data-driven stories: AI’s ability to process and analyze large datasets is unparalleled, helping to identify compelling patterns and trends and turn them into narratives that are both informative and captivating. These stories can be used to back up claims with hard data, making your pitches more credible and authoritative.

And by analyzing a journalist’s past work and preferences, AI can tailor pitches that resonate on a personal level. It moves beyond throwing darts in the dark to using a guided missile that hits the bullseye of relevance and engagement. Personalized pitches cut through the noise, increasing the likelihood of your story being picked up.

Let’s say you are leading a supply chain company’s PR. To make an impact, start by gathering all kinds of related data. You want the nitty-gritty on how global supply chain hiccups are playing out, how the company is excelling in logistics, and what customers are saying. Next, use AI to sift through the data and unearth nuggets that prove that the company is outperforming the chaos better than the competition. With these insights, craft a pitch that’ll make a big splash, showing off stats and numbers that back up your claims. This approach does more than tell people that the company is top-notch; it shows them.

  3. Announcements: Need to make a quick announcement? AI’s efficiency ensures your news hits the mark, fast. Whether it’s a corporate update, an important event, or a crisis response, AI can quickly digest the necessary information and produce clear, concise and impactful announcements.

Imagine you’re the PR strategist for a tech company that is about to announce a huge partnership with another organization that will optimize technical systems. The main goal is to spread the word in a way that is catchy, fun and gets everyone buzzing. Use AI to take a deep dive into the latest market trends, competitor news and past successful announcements. Next, give AI all the exciting details about the partnership so it can craft an engaging announcement that nails all the key points while keeping it on-brand.

AI can take the messaging one step further. You know the message needs to hit different notes for different folks. AI can help you spin your message a few ways — more in-depth for the technology crowd and more formal for the business audience. The message ends up sharp, adaptable and in sync with your target audiences.


  4. Influencer collaborations: The best part about using custom GPTs is that the more information and feedback you give them, the better they perform and the more personalized they become. And personalization is where AI really flexes its muscles. A custom GPT adapts to different influencer styles, making sure your brand message harmonizes with their content, style and audience engagement. This ensures that your message is not only consistent with your brand, but also resonates with the influencer’s followers.

Let’s say you are a PR professional working for a healthcare company that just released a wellness app. The goal is to drive consumer interest through the influencer market. After identifying a diverse group of influencers, AI can analyze the content and engagement style of each influencer, understanding what kind of messages resonate best with their followers. Using GPT insights, you can craft customized content for each influencer’s style and the specific media platform. As the campaign evolves, GPT monitors performance and suggests adjustments to maintain relevance. The approach ensures the wellness app’s message resonates with diverse audiences and enhances engagement across platforms.
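To make the data-sifting step in the supply-chain example above concrete, here is a toy sketch. All figures, metric names, and the function itself are invented for illustration; a real workflow would run AI analysis over the company’s actual logistics and customer data:

```python
# Hypothetical logistics metrics vs. an industry baseline, used to surface
# the stats most worth leading a pitch with. All numbers are invented.
company = {"on_time_delivery_pct": 96.0, "avg_transit_days": 3.1, "damage_rate_pct": 0.4}
industry = {"on_time_delivery_pct": 88.0, "avg_transit_days": 4.5, "damage_rate_pct": 1.1}

# Whether a larger value is better for each metric.
higher_is_better = {"on_time_delivery_pct": True, "avg_transit_days": False, "damage_rate_pct": False}

def pitch_worthy_stats(company, industry, higher_is_better):
    """Return metrics where the company beats the industry baseline, with the gap."""
    wins = {}
    for metric, value in company.items():
        baseline = industry[metric]
        beats = value > baseline if higher_is_better[metric] else value < baseline
        if beats:
            wins[metric] = round(value - baseline, 2)
    return wins

print(pitch_worthy_stats(company, industry, higher_is_better))
```

Each surviving entry is a claim the pitch can back with a hard number, which is exactly the “show, don’t tell” framing the example calls for.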

Incorporating AI into PR strategies means striking a balance between automated efficiency and human ingenuity. AI can assist with the heavy lifting, allowing us to focus on strategic aspects. This year, let’s harness AI to make our content more efficient, impactful, emotionally resonant, and ethically sound. That’s a narrative we can all get behind.


Revolutionize the Employee Experience With an AI-powered EX Platform from Simpplr https://www.prdaily.com/revolutionize-the-employee-experience-with-an-ai-powered-ex-platform-from-simpplr/ https://www.prdaily.com/revolutionize-the-employee-experience-with-an-ai-powered-ex-platform-from-simpplr/#respond Fri, 15 Mar 2024 10:00:39 +0000 https://www.prdaily.com/?p=342362 The future of work is one in which the employee experience and artificial intelligence meet to reshape the fabric of organizational dynamics.

The post Revolutionize the Employee Experience With an AI-powered EX Platform from Simpplr appeared first on PR Daily.

The future of work is one in which the employee experience and artificial intelligence meet to reshape the fabric of organizational dynamics.

After the Princess Catherine photo disaster, have this conversation with your clients https://www.prdaily.com/princess-catherine-photo-disaster-have-this-conversation-with-your-clients/ https://www.prdaily.com/princess-catherine-photo-disaster-have-this-conversation-with-your-clients/#respond Thu, 14 Mar 2024 15:01:13 +0000 https://www.prdaily.com/?p=342349 It’s time to come clean about photos. Gabriel De La Rosa Cols is a principal at Intelligent Relations. The recent release of a doctored photo of Princess Catherine, formerly known as Kate Middleton, and her family sparked widespread concern about the use of digital editing tools. But it also showed the ease with which conspiracy […]

The post After the Princess Catherine photo disaster, have this conversation with your clients appeared first on PR Daily.

It’s time to come clean about photos.

Gabriel De La Rosa Cols is a principal at Intelligent Relations.


The recent release of a doctored photo of Princess Catherine, formerly known as Kate Middleton, and her family sparked widespread concern about the use of digital editing tools. But it also showed the ease with which conspiracy firestorms can arise from seemingly innocuous actions.

If you haven’t heard the story, several news agencies, including the Associated Press and Reuters, recently shared a family photograph of Catherine, Princess of Wales. The photo appeared intended to quell speculation over her health, as she had been virtually absent from the public eye since her abdominal surgery two months earlier. However, those news agencies later retracted the photo and reported that there was evidence of photo manipulation.

Since then, a number of conspiracy theories have emerged about the state of the British Royal Family, Catherine’s health and pretty much anything else trolls can think of. And an apology from Princess Catherine insisting that the edits were merely the result of her own amateur efforts did nothing to stop the conspiracy storm. 

So what happened here? And what should public relations professionals do to prevent similar incidents for their clients in a time when photo manipulation and AI-generated images are making it harder to know what is real or fake? 


Loss of credibility

Clearly, media entities that are rightly focused on maintaining their credibility won’t stand for doctored images or anything else that might indicate a lack of honesty. The moment various publications realized the photo was doctored, not only did they retract it, but the photo immediately became a fantastic example of what not to do if you want a good relationship with the media. 

Even after the retractions, the image did plenty of damage: the same media entities that published the image were beset by conspiracy theorists. As a result, some media companies have watched their credibility lose ground to unscrupulous actors who just want to foment rumors. 

This should matter to us because part of our job as public relations professionals is maintaining good relationships with media entities and journalists. We’re here not only to help our clients, but also to make sure the stories we ask media personnel to promote are credible and won’t hurt their reputations. Needless to say, if we fail in that mission and it results in a similar PR fiasco because of an edited or AI-generated image or a false story, we’re going to have a very hard time convincing the same reporter or publication to view our client as a source in the future. 

Basically, we need to avoid anything that looks like a lack of transparency on our part. The problem with Princess Catherine’s picture wasn’t that an amateur photographer decided to touch up a photo. The problem was the appearance of dishonesty, and that’s something that will really hurt the public image of any brand or famous spokesperson involved. Unfortunately, these types of incidents will continue to happen as more public figures and brands place a greater emphasis on digital technologies and/or AI-generated images. 

Avoiding the firestorm

PR professionals are responsible for ensuring the authenticity of the content they send to the media on behalf of clients. Of course, you might not even be aware that your clients are using AI or editing to change images until some news station takes issue with a photo you’ve sent over. But I think it’s fair to say that the Princess Catherine story at least gives you some leverage to open a discussion with your clients about the need to be cautious.

For the most part, edited images or images made with AI can suggest that your client has something to hide. So even if your pictures are a bit grainy or outdated, that’s preferable to an image that has clearly been edited.

You can also look for signs of an edited image yourself. In the case of the Princess Catherine image, there were clearly misaligned and missing objects. An AI-generated image often has smooth or blurry textures, unnatural colors or unnatural lighting. If you discover an image has been edited or could give the impression of being edited or AI-generated, you might need to ask for a new image you can share.

Remember, if this is a mistake a high-powered PR team meant to protect royalty from criticism can make, then it’s also a mistake any of us could fall into. If you suspect your client may have doctored a photo, make sure not to send it out to the media. And maybe sit down and discuss how recent events have shown it’s just better to be transparent from the start. Your clients will thank you in the end.

AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-7/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-7/#respond Thu, 14 Mar 2024 09:00:55 +0000 https://www.prdaily.com/?p=342341 From risks to regulation, what you need to know this week.  AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories. Here’s what communicators need to know.  AI risks and regulation As […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

From risks to regulation, what you need to know this week. 


AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories.

Here’s what communicators need to know. 


AI risks and regulation

As always, new and recurring risks continue to emerge around the implementation of AI. Hence, the push for global regulation continues.

Consumers overwhelmingly support federal AI regulation, too, according to a new survey from HarrisX. “Strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such,” reads the exclusive feature in Variety.

But is the U.S. government best equipped to lead on regulation? On Wednesday, the European Parliament approved a landmark law that, its announcement claims, “ensures safety and compliance with fundamental rights, while boosting innovation.” It is expected to take effect this May.

The law includes new rules banning applications that threaten citizen rights, such as biometric systems collecting sensitive data to create facial recognition databases (with some exceptions for law enforcement). It also imposes clear obligations for high-risk AI systems, which include “critical infrastructure, education and vocational training, employment, essential private and public services, certain systems in law enforcement, migration and border management,” and “justice and democratic processes,” according to the EU Parliament.

The law will also require general-purpose AI systems and the models they are based on to meet transparency requirements in compliance with EU copyright law and publishing, which will include detailed summaries of the content used for training. Manipulated images, audio and video will need to be labeled.

CNBC reports:

Dragos Tudorache, a lawmaker who oversaw EU negotiations on the agreement, hailed the deal, but noted the biggest hurdle remains implementation.

“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” Tudorache said on social media on Tuesday.

“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” he added. 

Legal professionals described the act as a major milestone for international artificial intelligence regulation, noting it could pave the path for other countries to follow suit.

Last week, the bloc brought into force landmark competition legislation set to rein in U.S. giants. Under the Digital Markets Act, the EU can crack down on anti-competitive practices from major tech companies and force them to open out their services in sectors where their dominant position has stifled smaller players and choked freedom of choice for users. Six firms — U.S. titans Alphabet, Amazon, Apple, Meta, Microsoft and China’s ByteDance — have been put on notice as so-called gatekeepers.

Communicators should pay close attention to U.S. compliance with the law in the coming months, as diplomats reportedly worked behind the scenes to water down the legislation.

“European Union negotiators fear giving in to U.S. demands would fundamentally weaken the initiative,” reported Politico.

“For the treaty to have an effect worldwide, countries ‘have to accept that other countries have different standards and we have to agree on a common shared baseline — not just European but global,’” said Thomas Schneider, the Swiss chairman of the committee.

If this global regulation dance sounds familiar, that’s because something similar happened when the EU adopted the General Data Protection Regulation (GDPR) in 2016, an unprecedented consumer privacy law that required cooperation from any company operating in a European market. That law influenced the creation of the California Consumer Privacy Act two years later. 

As we saw last week when the SEC approved new rules for emissions reporting, the U.S. can water down regulations below a global standard. It doesn’t mean, however, that communicators with global stakeholders aren’t beholden to global laws.

Expect more developments on this landmark regulation in the coming weeks.

As news of regulation dominates, we are reminded that risk still abounds. While AI chip manufacturer Nvidia rides all-time market highs and has earned coverage for its competitive employer brand, the company also finds itself in the crosshairs of a proposed class action copyright infringement lawsuit, just as OpenAI did nearly a year ago.

Authors Brian Keene, Abdi Nazemian and Stewart O’Nan allege that their works were part of a dataset Nvidia used to train its NeMo AI platform.

QZ reports:

Part of the collection of works NeMo was trained on included a dataset of books from Bibliotik, a so-called “shadow library” that hosts and distributes unlicensed copyrighted material. That dataset was available until October 2023, when it was listed as defunct and “no longer accessible due to reported copyright infringement.”

The authors claim that the takedown is essentially Nvidia’s concession that it trained its NeMo models on the dataset, thereby infringing on their copyrights. They are seeking unspecified damages for people in the U.S. whose copyrighted works have been used to train NeMo’s large language models within the past three years.

“We respect the rights of all content creators and believe we created NeMo in full compliance with copyright law,” a Nvidia spokesperson said.

While this lawsuit is a timely reminder that course corrections can be framed as an admission of guilt in the larger public narrative, the stakes are even higher.

A new report from Gladstone AI, commissioned by the State Department and informed by consultations with experts at several AI labs including OpenAI, Google DeepMind and Meta, offers substantial recommendations for addressing the national security risks posed by the technology. Chief among its concerns is what it characterizes as a “lax approach to safety” in the interest of not slowing down progress, along with cybersecurity concerns and more.

Time reports:

The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.

On the ground level, Microsoft stepped in to block terms that generated violent, sexual imagery using Copilot after an engineer expressed concerns to the FTC.

According to CNBC:

Prompts such as “pro choice,” “pro choce” [sic] and “four twenty,” which were each mentioned in CNBC’s investigation Wednesday, are now blocked, as well as the term “pro life.” There is also a warning about multiple policy violations leading to suspension from the tool, which CNBC had not encountered before Friday.

“This prompt has been blocked,” the Copilot warning alert states. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.”

This development is a reminder that AI platforms will increasingly put the onus on end users to follow evolving guidelines when publishing automated content. Whether you work within the capabilities of consumer-optimized GenAI tools or run your own custom GPT, sweeping regulation of the AI industry is not a question of “if” but “when.”

Tools and use cases 

Walmart is seeking to cash in on the AI craze with pretty decent results, CNBC reports. Its current experiments surround becoming a one-stop destination for event planning. Rather than going to Walmart.com and typing in “paper cups,” “paper plates,” “fruit platter” and so on, the AI will generate a full list based on your needs – and of course, allow you to purchase it from Walmart. Some experts say this could be a threat to Google’s dominance, while others won’t go quite that far, but are still optimistic about its potential. Either way, it’s something for other retailers to watch.

Apple has been lagging behind other major tech players in the AI space. Its biggest AI project to date is a laptop that touts its power for running other companies’ AI applications, rather than an AI of its own. But FastCompany says that could change this summer when Apple rolls out its next operating systems, which are all but certain to include their own AI.

FastCompany speculates that a project internally dubbed “AppleGPT” could revolutionize how voice assistant Siri works. It also may include an AI that lives on your device rather than in the cloud, which would be a major departure from other services. They’ll certainly make a splash if they can pull it off.

Meanwhile, Google’s Gemini rollout has been anything but smooth. Recently the company restricted queries related to upcoming global elections, The Guardian reported.

A statement from Google’s India team reads: “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.” The Guardian says that even basic questions like “Who is Donald Trump?” or asking about when to vote give answers that point users back to Google searches. It’s another black eye for the Gemini rollout, which consistently mishandles controversial questions or simply sends people back to familiar, safe technology.

But then, venturing into the unknown has big risks. Nature reports that AI is already being used in a variety of research applications, including generating images to illustrate scientific papers. The problems arise when close oversight isn’t applied, as in the case of a truly bizarre image of rat genitalia with garbled, nonsense text overlaid on it. Worst of all, this was peer reviewed and published. It’s yet another reminder that these tools cannot be trusted on their own. They need close oversight to avoid big embarrassment. 

AI is also threatening another field, completely divorced from scientific research: YouTube creators. Business Insider notes that there is an exodus of YouTubers from the platform this year. Their reasons are varied: Some face backlash, some are seeing declining views and others are focusing on other areas, like stand-up comedy. But Business Insider says that AI-generated content swamping the video platform is at least partly to blame:


Experts believe if the trend continues, it may usher in a future where the relatable and authentic friends people used to turn to the platform to watch are few and far between, replaced instead by a mixture of exceedingly high-end videos only the MrBeasts of the internet can reach and sub-par AI junk thrown together by bots and designed to meet our consumption habits with the least effort possible.

That sounds like a bleak future indeed – and one that could also shrink the pool of influencers available to partner with on the platform.

But we are beginning to see some backlash against AI use, especially in creative fields. At SXSW, two filmmakers behind “Everything Everywhere All at Once” decried the technology. Daniel Scheinert warned against AI, saying: “And if someone tells you, there’s no side effect. (AI’s) totally great, ‘get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff.”

Thinking carefully about responsible AI use is something we can all get behind. 

AI at work

As the aforementioned tools promise new innovations that will shape the future of work, businesses continue to adjust their strategies in kind.

Thomson Reuters CEO Steve Hasker told the Financial Times that the company has “tremendous financial firepower” to expand the business into AI-driven professional services and information ahead of selling the remainder of its holding in the London Stock Exchange Group (LSEG).

“We have dry powder of around $8 billion as a result of the cash-generative ability of our existing business, a very lightly levered balance sheet and the sell down of [our stake in] LSEG,” said Hasker. 

Thomson Reuters has been on a two-year reorg journey to shift from a content provider into a “content-driven” tech company. It’s a timely reminder that now is the time to consider how AI fits not only into your internal use cases, but your business model. Testing tech and custom GPTs as “customer zero” internally can train your workforce and prepare a potentially exciting new product for market in one fell swoop.

A recent WSJ feature goes into the cost-saving implications of using GenAI to integrate new corporate software systems, highlighting concerns that the contractors hired to implement these systems will see bottom-line savings through automation while charging companies the same rate. 

WSJ reports:

How generative AI efficiencies will affect pricing will continue to be hotly debated, said Bret Greenstein, data and AI leader at consulting firm PricewaterhouseCoopers. It could increase the cost, since projects done with AI are higher quality and faster to deliver. Or it could lead to lower costs as AI-enabled integrators compete to offer customers a better price.

Jim Fowler, chief technology officer at insurance and financial services company Nationwide, said the company is leaning on its own developers, who are now using GitHub Copilot, for more specialized tasks. The company’s contractor count is down 20% since mid-2023, in part because its own developers can now be more productive. Fowler said he is also finding that contractors are now more willing to negotiate on price.

Remember, profits and productivity are not necessarily one and the same. Fresh Axios research found workers in Western countries are embracing AI’s potential for productivity less than others – only 17% of U.S. respondents and 20% of EU respondents said that AI improved productivity. That’s a huge gap from the countries reporting higher productivity, including 67% of Indian respondents, 65% in Indonesia and 62% in the UAE.

Keeping up and staying productive will also require staying competitive in the global marketplace. No wonder the war for AI talent rages on in Europe.

“Riding the investment wave, a crop of foreign AI firms – including Canada’s Cohere and U.S.-based Anthropic and OpenAI – opened offices in Europe last year, adding to pressure on tech companies already trying to attract and retain talent in the region,” Reuters reported.

AI is also creating new job opportunities. Adweek says that marketing roles involving AI are exploding, from the C-suite on down. Among other new uses:

Gen AI entails a new layer of complexity for brands, prompting people within both brands and agencies to grasp the benefits of technology, such as Sora, while assessing its risks and ethical implications.

Navigating this balance could give rise to various new roles within the next year, including ethicists, conversational marketing specialists with expertise in sophisticated chatbots, and data-informed strategists on the brand side, according to Jason Snyder, CTO of IPG agency Momentum Worldwide.

Additionally, Snyder anticipates the emergence of an agency integration specialist role within brands at the corporate level.

“If you’re running a big brand marketing program, you need someone who’s responsible for integrating AI into all aspects of the marketing program,” said Snyder. “[Now] I see this role in bits and pieces all over the place. [Eventually], whoever owns the budget for the work that’s being done will be closely aligned with that agency integration specialist.”

As companies like DeepMind offer incentives such as restricted stock, domestic startups will continue to struggle with hiring top talent if their AI tech stack isn’t up to the standard of big players like NVIDIA.

“People don’t want to leave because when you don’t have anything when they have peers to work with, and when they already have a great experimentation stack and existing models to bootstrap from, for somebody to leave, it’s a lot of work,” Aravind Srinivas, the founder and CEO of Perplexity, told Business Insider.

“You have to offer such amazing incentives and immediate availability of compute. And we’re not talking of small compute clusters here.”

Another reminder that building a competitive, attractive employer brand around your organization’s AI integrations should be on every communicator’s mind. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

]]>
How IBM unlocks the heart of AI through brand experiences https://www.prdaily.com/how-ibm-unlocks-the-heart-of-ai-through-brand-experiences/ https://www.prdaily.com/how-ibm-unlocks-the-heart-of-ai-through-brand-experiences/#respond Wed, 13 Mar 2024 11:00:59 +0000 https://www.prdaily.com/?p=342318 Think AI is all flash and no substance? Marketing leaders at this year’s SXSW Festival share how to use the technology to embrace human emotion. Beki Winchel is senior director of content & engagement at Spiro. This year, it seems AI is all that any marketer can talk about. The trending technology has become more […]

The post How IBM unlocks the heart of AI through brand experiences appeared first on PR Daily.

]]>
Think AI is all flash and no substance? Marketing leaders at this year’s SXSW Festival share how to use the technology to embrace human emotion.

Beki Winchel is senior director of content & engagement at Spiro.

This year, it seems AI is all that any marketer can talk about.

The trending technology has become more accessible than ever, forcing brands to get on board—or get left behind.

As both excitement and fear over AI continue to swirl, Erin McElroy, IBM’s program director of executive programs and event experiences, and Carley Faircloth, Spiro’s global chief marketing officer, uncovered opportunities for savvy brand marketers to use AI to strengthen relationships with their key audiences.

Here’s what you can glean from their conversation at Brand Innovators’ Leadership in Brand Marketing Summit at SXSW:

Using AI to evoke empathy

When you move past consuming to creating, real brand impact can happen.

“You can always tell when you’re watching a marketing campaign that’s really just trying to sell you and then you can tell when you really feel something — when you’re really moved,” McElroy said. “To truly get the value out of AI, we need to continue like we have with other technologies to be creators… We don’t want to be consumers; we want to be creators of AI.”

In a digital brand experience, IBM’s watsonx Assistant team used the company’s AI to simulate what it’s like to be a call center agent — which can be a frustrating experience for both the agent and the customer.

Using gamification, The Contact Center Challenge placed users in the role of the customer service agent. For the average participant, it took roughly 45 seconds for frustration to mount over the volume of requests coming at them.

That’s when the watsonx Assistant stepped in, automating some tasks with complex look-ups, natural language and self-serve answers. At the game’s end, participants were shown how they did without the AI, how much the assistant helped when they began drowning in requests, and how much easier it would have been to use AI from the beginning.

A recent McKinsey & Company report revealed that Gen AI can boost productivity in customer care functions by 30% to 40%. But it’s not just a benefit for the company: Gen AI increased issue resolution by 14% an hour and reduced handle time by 9%.

The research shows that automating tasks that don’t require the human touch enables contact center and customer service agents to focus on understanding customers and what they need.

Yet the numbers don’t give you the feeling of mounting stress as you struggle to handle calls — or the sigh of relief when the tech streamlines tasks. IBM’s experience did. And that more powerfully cemented in B2B customers’ minds and hearts the idea that AI can help customer service agents better focus on and delight customers.

McElroy shared that a COO from a large retail brand went through the experience, commenting that previous sales pitches had never made the technology’s value clear. But after completing the game? “Now I get it,” the leader told McElroy.

“That just meant a lot to me in terms of the power of our experiences,” said McElroy. “We really have an opportunity as experiential marketers to help people understand what’s going on…people remember that, more than a great pitch.”

Experientializing tech to cultivate emotion and relationships

“Whenever we get a brief around a particular novel technology, the challenge for us on [the agency side] is really experientializing that technology in a way that will break through,” said Faircloth. “At the end of the day, it’s about the experience.”

McElroy agreed, adding that it comes down to what you as a brand marketer are trying to accomplish and how it meets your business goals.

“We become more effective marketers and more effective businesspeople when we step back and look at what is truly going to bring value — not just sell, promote or perform,” McElroy said. “The things that perform the best are the things that have heart and have that authenticity built in.”

Faircloth pointed out that it’s all about intention when it comes to the authenticity in your approach.

“It really is about knowing the audience,” Faircloth said. “It’s something we as marketers all strive for and talk about. We do a lot of work understanding the psychographics and demographics—all the things that play into it. We all have to tap into the mind of our consumer.”

McElroy said that understanding your audience is a crucial fundamental for effective AI efforts.

“What is the why behind what you’re doing?” she asked. “When you inform your strategy that way, then the tools start to work for you.”

Moving from AI experimentation to practical brand application

There’s another crucial element to using AI effectively in brand experiences and marketing campaigns: Integration across your organization.

“Surrounding AI, there’s a lot of talk about benefits, risks, data and the whole ‘garbage in/garbage out’ aspect,” Faircloth said. “What we don’t hear a lot about is organizational preparedness — from experimental, to competent, to adoption, to integration.”

That is often a sticking point: turning the tech from a flash in the pan into a consistent strategy and way of working adopted throughout the organization. How can you move forward on the path of AI integration, regardless of the industry you’re in?

McElroy says it all comes down to being intentional with your strategy and understanding what kind of data you want to capture.

“I think a best practice, no matter what platform you use, is one that will let you own your data because it becomes your intellectual property,” McElroy said.

She continued: “This allows you to be a value creator with AI, because you’re taking your own enterprise data and putting it together with the data your [AI] platform offers to create something new.”

It’s important to point out that though data is paramount, it won’t replace marketers actively tuning into and understanding audience behavior shifts.

“That to me is the winning proposition,” McElroy said. “When you’re listening, you end up with a good result.”

]]>
How Microsoft is using AI for measurement for its own comms https://www.prdaily.com/how-microsoft-is-using-ai-for-measurement-for-its-own-comms/ https://www.prdaily.com/how-microsoft-is-using-ai-for-measurement-for-its-own-comms/#respond Wed, 06 Mar 2024 12:00:53 +0000 https://www.prdaily.com/?p=342248 From hindsight to foresight. Microsoft is running at the head of the AI craze. From its partnership with OpenAI to its Copilot tool to Azure AI and more, the tech giant is putting out new tools by the day to help people and organizations take advantage of generative AI. But how is Microsoft’s own communications […]

The post How Microsoft is using AI for measurement for its own comms appeared first on PR Daily.

]]>
From hindsight to foresight.

Microsoft is running at the head of the AI craze. From its partnership with OpenAI to its Copilot tool to Azure AI and more, the tech giant is putting out new tools by the day to help people and organizations take advantage of generative AI.

But how is Microsoft’s own communications team using AI in their day-to-day work?

We got a bit of insight during PR Daily’s recent Public Affairs & Speechwriting Virtual Conference, when Microsoft VP of Public Affairs Brent Colburn revealed several ways his own department is using AI, from measurement to media relations.

Using AI for measurement

“Traditionally, we’re really good at hindsight in communications,” Colburn noted in his presentation. And AI can make that hindsight part of measurement even faster and more effective. AI can quickly generate clip reports that don’t merely show you all the media you’ve generated, but more personalized reports that show all pieces from a trade journal, for instance, or from a specific geographic location.

Colburn also sees a great opportunity for what he calls insight: real-time information that can help us respond in the moment.

“As news is spooling out, how can we be looking at clips, how can we be looking at news articles that come online, in a more thoughtful and nuanced way?” Colburn mused. He noted that competitive or comparative analysis is also a strength of AI. For instance, AI can help see how coverage of Microsoft stacks up against coverage of Google on a certain topic or even compare coverage of five different Congresspeople. That kind of analysis, if delivered by a human, could take hours, while an AI can deliver it all but instantly.

But what Colburn considers most exciting is foresight, which allows us to use AI to peer into the future and “make better decisions.”

“They don’t just become a record of what’s occurred, but a little bit of a guidepost for where you might want to go,” Colburn said.

For instance, foresight can help us:

  • Identify reporters to pitch to based on their past coverage.
  • Help identify what else reporters you’ve worked with in the past might be interested in covering next.
  • Identify issues or problems with published stories for faster correction.
  • Analyze who’s really reading stories and better understand audiences.

Using AI for earned media

In addition to using AI for measurement tasks, Microsoft has also mapped its entire earned media process, from story pitch to publication, and identified several areas where AI can automate or act as a copilot.

Automation opportunities include creating reports or advisories, what Colburn refers to as “drudgery” tasks. But AI acting as a copilot — echoing the name of Microsoft’s flagship AI product — also offers opportunities to advise humans without taking over the whole show.

Suggestions for using AI in the media relations cycle include:

  • Giving AI your best story ideas and asking for what you’re missing.
  • Giving AI your initial talking points and asking it to add its own.
  • Showing AI the reporters you’re going to pitch to and asking who else you should consider.
  • Asking AI to review a reporter’s past work to identify the kinds of questions they’re likely to ask in an interview.

All of these tactics not only save time, but in Colburn’s words, they help you “think around the corner a little bit” and put out your best work — with a little extra help.

Watch Colburn’s full presentation.

]]>
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-6/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-6/#respond Thu, 29 Feb 2024 11:00:38 +0000 https://www.prdaily.com/?p=342155 The latest on risks, regulation and uses. AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories. Here’s what communicators need to know.  AI risks One of the biggest concerns about generative […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
The latest on risks, regulation and uses.

AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories.

Here’s what communicators need to know. 

AI risks

One of the biggest concerns about generative AI is the possibility of building bias into machine learning systems that can influence output. It appears that Google may have overcorrected for this possibility with the image generation tools in its newly renamed AI tool Gemini.

The New York Times reported that Google temporarily suspended Gemini’s ability to generate images of people after the tool returned a number of AI-generated images that over-indexed on including women and people of color, even when this led to historical misrepresentations, or that simply refused to show white people.

Among the missteps, Gemini returned images of Asian women and Black men in Nazi uniforms when asked to show a German soldier in 1943, and refused to show images of white couples when asked.

In a statement posted to X, Google’s Comms team wrote, “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

This issue highlights the challenge Google faces in overcoming the biases present on the broader internet, which fuels its AI generation tool, without going too far in the other direction.

Finally, a reminder that what comes from generative AI is often made of pure imagination. 

Business Insider reports that families were enticed with beautiful, AI-generated fantasies of a candy-filled extravaganza that nodded to Willy Wonka. But families in Scotland forked over the equivalent of $44 for a barren warehouse with a few banners taped to the walls, photos revealed.

It’s a sad reminder that unscrupulous people will continue using AI in ways big and small, eroding trust overall. Expect warier, more suspicious consumers moving forward as we all begin to question what’s real and what’s illusion.

Regulation

Microsoft’s AI partnerships are once more under scrutiny by regulators. This time, the tech giant’s collaboration with France’s Mistral AI has drawn the attention of the EU, Reuters reported. Microsoft invested $16 million into the startup in hopes of incorporating Mistral’s models into its Azure platform. Some EU lawmakers are already demanding an investigation as Microsoft seems set to gain even more power in the AI space. Investigations are already underway due to Microsoft’s stake in OpenAI, maker of ChatGPT.

But the investigations reveal broader cracks in the EU’s views toward AI. As Reuters reports:

Alongside Germany and Italy, France also pushed for exemptions for companies making generative AI models, to protect European startups such as Mistral from over-regulation.

“That story seems to have been a front for American-influenced big tech lobby,” said Kim van Sparrentak, an MEP who worked closely on the AI Act. “The Act almost collapsed under the guise of no rules for ‘European champions’, and now look. European regulators have been played.”

A third MEP, Alexandra Geese, told Reuters the announcement raised legitimate questions over Mistral and the French government’s behaviour during the negotiations.

“There is a concentration of money and power here like the world has never seen, and I think this warrants an investigation.”

In the United States, Congress has created a bipartisan task force focused on AI and on combating its negative implications, like deepfakes and job loss, even as the nation acts as an international leader in the development of the field, NBC News reported. Twelve members from each party will join the task force.

But don’t expect sweeping legislative priorities out of the task force. NBC News describes the task force’s mission as “writing a comprehensive report that will include guiding principles, recommendations and policy proposals developed with help from House committees of jurisdiction.” 

Some think Congress isn’t moving fast enough to put recommendations and policies into effect, so they’re taking matters into their own hands. California, the nation’s most populous state and home to many tech companies, intends to roll out legislation in the near future to regulate AI.

“I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law,” California Democratic state Senator Scott Wiener, of San Francisco, told NPR. 

The California measure, Senate Bill 1047, would require companies building the largest and most powerful AI models to test for safety before releasing those models to the public.

AI companies would have to tell the state about testing protocols and guardrails, and if the tech causes “critical harm,” California’s attorney general could sue.

Wiener says his legislation draws heavily on the Biden Administration’s 2023 executive order on AI.

This floats the very real possibility that America could see a patchwork of regulations in the AI space if Congress doesn’t get its act together – and soon.

AI use cases

Finally, we know what’s scary about AI, and we know what governments want to do with AI – but how are companies using AI today? 

The news industry continues to be especially interested in AI. Politico published an interview with Oxford doctoral candidate Felix M. Simon about how AI has already descended on the industry, impacting everything from article recommendations in news apps to, yes, how the news gets made.

Simple, non-terrifying use cases include giving AI long-form content and having it digest the piece into bullet points for easy consumption, or having an AI-generated voice read an article aloud. But the more frightening possibilities include AI replacing human reporters, outlets churning out mass quantities of stories instead of focusing on quality, and Big Tech taking full control of media through its ownership of AI.

In related news, Google is paying small news publishers to use its AI tools to create content, Adweek reported. The independent publishers will receive sums in the five-figure range to post content over the course of a year. The tool, which is not currently available for public use, indexes recent reports, such as from government agencies, and summarizes them for easy publication.

“In partnership with news publishers, especially smaller publishers, we’re in the early stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work,” reads a statement from Google shared with Adweek. “These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

Still, it seems naive to think that these tools won’t replace at least some journalists, no matter what everyone would like to believe.

Lending company Klarna says its use of AI has enabled it to replace 700 human employees – coincidentally, the company says, the same number of people it recently laid off. Fast Company reports that Klarna has gone all-in on AI for customer service, where it currently accounts for two-thirds of all customer conversations, with satisfaction ratings similar to those of human agents. 

Whether you view this all as inevitable progress, nightmare fuel or a bit of both, there is likely no escaping the AI onslaught. That’s according to JPMorgan Chase CEO Jamie Dimon.

“This is not hype,” Dimon told CNBC. “This is real. When we had the internet bubble the first time around … that was hype. This is not hype. It’s real. People are deploying it at different speeds, but it will handle a tremendous amount of stuff.”

Guess we’ll find out. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

5 ways PR pros want to use AI in the future https://www.prdaily.com/5-ways-pr-pros-want-to-use-ai-in-the-future/ Fri, 09 Feb 2024 11:00:26 +0000

The post 5 ways PR pros want to use AI in the future appeared first on PR Daily.

There’s no doubt that artificial intelligence has overhauled the practice of public relations, impacting the tools professionals use and the way they develop emails, press releases and social media copy — and introducing new pitfalls and anxieties. 

ACCESSWIRE and PR Daily partnered to find out how PR professionals are currently thinking about AI — whether they use it, what they want to apply it to, what they’re excited about and what they’re afraid might happen. The result is our latest report, “The Future of AI in PR.” 

The insights in this report can help guide the development of tools and the way they’re shared and implemented, both within organizations and the wider world.  

Here’s a look at some of the takeaways from the report. 

Of the more than 200 respondents to the survey, 64% had not yet incorporated AI tools into their workstreams, while 36% had. We asked PR pros what they hope to use AI for in the future and where they can see it creating efficiencies. 

 

Accesswire survey results

 

Seventy-one percent hope that AI will assist in content generation. This stat will likely come as no surprise given the ubiquity and rise of chatbots that can write copy in a flash and image-generation tools. The majority of respondents who are currently using AI said that they are using it for content generation, but they find that the work AI tools produce still needs a heavy edit. They hope that this will improve in the future so the tools can be used for “social media post tweaking, generating basic surveys, creating talking points, help with FAQ documents, and helping to turn drafts into better content.” 

  • 59% seek predictive analytics for PR planning. Conclusions around cost-benefit, budgeting and product launches may be more precise and comprehensive with mass data analysis. 
  • 42% want to use AI for automated media sentiment analysis. Although many social listening tools use AI for this purpose, they can be pricey. In the future, developments in AI may enable organizations to bring this in-house. 
  • 29% want AI to help maximize crisis management systems. Perhaps influenced by the past year, marked by brand backlash and social media platform upheaval, AI has the potential to more quickly analyze sentiment and develop frameworks and response plans. This priority may also have a content-generation angle: “If a company is going through a small media crisis, AI could be used to quickly create the content that is time sensitive to the crisis,” one respondent said. 
  • 29% look forward to the use of more chatbots for customer engagement. Although this technology has been around for quite some time, chatbots have historically not been able to solve or meet all customer needs; more advanced conversational data emerging from new AI tools is poised to ultimately expand the capabilities of automated customer service and interaction. 

There’s far more to be found in the report from Ragan and Accesswire, including PR pros’ concerns and fears — notably the “loss of personal touch in communication” — as well as their thoughts on challenges they are currently facing with the technology, factors they believe will lead to success, and their overall outlook on AI.  

One thing is certain: AI is part of the future of PR. “AI is going to be an integral part of the PR/Comms profession going forward,” one respondent said. “Practitioners who don’t adapt will be left behind.” 

Read on in the full report from PR Daily and ACCESSWIRE. 

 

The Future of AI in Public Relations https://www.prdaily.com/the-future-of-ai-in-public-relations/ Thu, 08 Feb 2024 09:00:13 +0000

The post The Future of AI in Public Relations appeared first on PR Daily.

Artificial intelligence is fundamentally changing the discipline of public relations.

 

Communicators need to shed cameo role for the lead https://www.prdaily.com/communicators-need-to-shed-cameo-role-for-the-lead/ Wed, 07 Feb 2024 12:00:24 +0000

The post Communicators need to shed cameo role for the lead appeared first on PR Daily.

How to take your star turn.

➢ Communicators have a steady seat in the boardroom and are taking an active role in crafting corporate policy and voting on pivotal issues.

➢ Generative AI wipes out the busy work and allows communicators time to be strategic, creative and proactive.

➢ The word “strategic” has been scrapped from the term Strategic Communications for its obvious redundancy, and the Chief Communications Officer now reports to the CEO.

➢ DEI and ESG are no longer polarizing labels as the practices of inclusion, diversity and sustainability are as normalized as media relations and community relations.

Is this the future of communications, or is this just a pipe dream? For most communicators, it’s hard to imagine a future in which the scenarios above come to fruition.  

There’s a small cohort, perhaps the ones attending Davos or other global economic forums, who have a seat at the table and the ear of the C-suite. But for most communicators, you are just too busy getting through the day.  

You say you’re too busy. In Ragan’s 2024 Communications Benchmark Report, communicators cite that the top reason they can’t be more strategic is that they are being pulled in too many directions, with tasks and requests that keep them from big-picture strategy. This answer has topped the other choices for the past six years of the Benchmark Report.   

 

 

The last several years have been seismic for communicators. As the stakes were raised during the early stages of the pandemic, and amid social justice and geopolitical unrest, communications met the moment. In my three decades in this space, I’ve never seen so much positive movement.  

Communicators were front and center, keeping stakeholders informed, employees safe and connected. They weren’t in the boardroom, per se, but they were (and arguably are today) at the heart of their organization, not missing a beat.  

The risk is real

But the more things changed, the less it stuck. As we look to the near future, we risk a slide back.  

The tremendous influence and authority gained from 2020 to 2023 risks slipping away amid the many competing priorities organizations face unless there is a collective awareness that comms is still taking a back seat to other roles in the organization. Communicators need to come together around the core issues impacting society and their organizations and assume a role they might not have deemed themselves worthy of when they first entered the profession.  

The stage is set to take the lead role on critical issues of the day: AI’s impact on work and society, employee upskilling, brand management and social issues, misinformation management and ensuring a reasonably diverse and inclusive work culture.  

We are not talking side character or cameo roles — comms should be the lead role in this regular series. To do this, it’s critical that communicators get curious beyond the walls of their own comms departments.  

Here are some ways forward: 

Play in the AI sandbox: Dabble in the potential of AI for you and your team and for the larger organization, asking questions that will positively transform business. Play with AI rather than pray that it won’t impact you. Partner with other communicators to create a framework that moves our profession forward.  

Become business fluent: Treat it like learning a new language and commit to diving into the numbers, getting curious about the ecosystem that drives your business and dashboarding KPIs that truly tie comms to business growth. 

Take the lead in upskilling: AI has accelerated the need for most professionals to develop new skills and competencies (upskilling has always been important). In addition to ensuring you and your comms team are learning new skills, you have the chance to be at the table formulating and overseeing a talent revolution. Somebody’s got to do it – why not you? 

Be comfortable in the fog: With the U.S. election and nearly 40 other elections around the globe in 2024, this will undoubtedly be another year of uncertainty and division within your organization and among your customers and other stakeholders. Communicators will need to manage the murkiness and be the voice of reason, stability and truth. 

Stop being so busy: As mentioned earlier, communicators are busy bees. But as you commit to taking the lead on upskilling, AI and strategic business counseling, you’ll find that the stage is yours. Decide where you need to spend your time or someone else will decide for you. 

This is all to say: Buckle up, communicators, for an exhilarating ride.  

Diane Schwartz is the CEO of Ragan Communications.  

 

AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-5/ Thu, 01 Feb 2024 09:00:36 +0000

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

Updates on risks, regulations, tools and AI at work. 

As the first month of the new year ends, there is no shortage of AI news for communicators to catch up on.

This week, we’ll look at the growing threat of AI deepfakes, clarity on how Washington’s seemingly glacial measures to regulate AI for businesses will apply in practice, along with new tools, initiatives and research that can foster a healthy and non-dystopian future with your AI partners, both in and out of work.

Risks

Many of the fears about deepfakes and other deceptive uses of AI came home to roost in the past few weeks. Most notably, X was flooded with non-consensual, explicit AI-generated photos of Taylor Swift. There was so much content that the social media platform temporarily removed the ability to search for the star’s name in an attempt to dampen its reach.

The scale and scope of the deepfakes – and Swift’s status as one of the most famous women in the world – catapulted the issue to the very highest echelons of power. “There should be legislation, obviously, to deal with this issue,” White House press secretary Karine Jean-Pierre said. 

Microsoft CEO Satya Nadella cited the incident as part of a need for “all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced. And there’s a lot to be done and a lot being done there,” Variety reported.

But the problem extends far beyond any one person. Entire YouTube ecosystems are popping up to create deepfakes that spread fake news about Black celebrities and earn tens of millions of views in the process. 

Outside of multimedia, scammers are scraping content from legitimate sites like 404 Media, rewriting it with generative AI, and re-posting it to farm clicks, sometimes ranking on Google above the original content, Business Insider reported. Unscrupulous people are even generating fake obituaries in an attempt to cash in on highly searched deaths, such as a student who died after falling onto subway tracks. The information isn’t correct, and it harms grieving families, according to Business Insider. 

That pain is real, but on a broader level, this fake content also threatens the bedrock of the modern internet: quality search functions. Google is taking action against some of the scammers, but the problem is only going to get worse. Left unchecked, the problem could alter the way we find information on the internet and deepen the crisis of fake news.

And unfortunately, the quality of deepfakes keeps increasing, further complicating the ability to tell truth from fiction. Audio deepfakes are getting better, targeting not only world leaders like Joe Biden and Vladimir Putin, but also lesser-known figures like a high school principal in Maryland.

These clips reanimate the dead and put words into their mouths, as in the case of an AI-generated George Carlin. They are also coming for our history, enabling the creation of authentic-seeming “documents” from the past that can deeply reshape our present by stoking animus. 

It’s a gloomy, frightening update. Sorry for that. But people are fighting to help us see what’s real, what’s not and how to use these tools responsibly, including a new initiative to help teens better understand generative AI. And there are regulations in motion that could help fight back. 

Regulation and government oversight 

This week, the White House followed up on its executive order announced last November with an update on key, coordinated actions being taken at the federal level. 

“The Order directed sweeping action to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more,” the statement reads.

The statement goes on to explain the convening of a White House AI council, which will include top federal officials from a range of departments and agencies. These agencies have completed all of the 90-day actions they were tasked with and made progress toward other, long-term directives.

“Taken together, these activities mark substantial progress in achieving the EO’s mandate to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond,” the statement continues.

Regulatory steps taken to mitigate safety and security risks include:

  • Activating the Defense Production Act to require that AI systems developers report “vital information” like AI safety test results to the Department of Commerce.
  • A proposed rule from the Department of Commerce would require U.S. cloud computing companies to report if they are providing AI training to foreign clients.
  • Risk assessments around AI’s use in critical infrastructure sectors. These were conducted by nine agencies including the Department of Defense, the Department of Transportation, the Department of Treasury and the Department of Health and Human Services.

Focusing on the mandated safety tests for AI companies, ABC News reports:

The software companies are committed to a set of categories for the safety tests, but companies do not yet have to comply with a common standard on the tests. The government’s National Institute of Standards and Technology will develop a uniform framework for assessing safety, as part of the order Biden signed in October.

Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”

Regulatory steps to “innovate AI for good” include:

  • The pilot launch of the National AI Research Resource, managed by the U.S. National Science Foundation as a catalyst for building an equitable national infrastructure to deliver data, software, access to AI models and other training resources to students and researchers. 
  • The launch of an AI Talent Surge program aimed at hiring AI professionals across the federal government. 
  • The start of the EducateAI initiative, aimed at funding AI educational opportunities for K-12 and undergraduate students. 
  • The funding of programs aimed at advancing AI’s influence in fields like regenerative medicine. 
  • The establishment of an AI Task Force specific to the Department of Health and Human Services, which will develop policies and bring regulatory clarity to how these policies can jumpstart AI innovation in healthcare. 

While the previous executive order offered suggestions and recommendations, these directives on AI mark the first tangible set of actions and requirements issued by the Biden-Harris administration. As the ABC coverage notes, however, the absence of a common standard for evaluating these systems for safety still leaves many questions. 

For now, communicators can take inspiration from the style and structure of this fact sheet – note the chart summarizing specific actions of agencies, even though the text is too small to read without zooming in.

Expect to hear more in the coming weeks about what AI business leaders learn from these safety and security mandates. Clarity and transparency on these processes may be slow coming, but these requirements amount to progress nonetheless. 

Because this regulation may also shed light on how certain companies are safeguarding your data, what we learn can also inform which programs and services your comms department decides to invest in. 

Tools and initiatives

China put its AI building into overdrive, pumping out 40 government-approved large language models (LLMs) in just the last six months, Business Insider reported, including 14 in the past week.

Many of the projects come from names known in the U.S. as well: Chinese search giant Baidu is the dominant force, but cellphone makers Huawei and Xiaomi are also making a splash, as is TikTok owner Bytedance. Bytedance caused controversy by allegedly using ChatGPT to build its own rival model, and creating a generative audio tool that could be responsible for some of the deepfakes we discussed earlier. 

It’s unclear how much traction these tools might get in the U.S.: Strict government regulations forbid these tools from talking about “illegal” topics, such as Taiwan. Additionally, the U.S. government continues to put a damper on Chinese AI ambitions by hampering the sale of semiconductors needed to train these models. But these Chinese tools are worth watching and understanding as they serve one of the biggest audiences on the planet. 

Yelp, long a platform that relied on reviews and photos from real users to help customers choose restaurants and other services, will now draw from those reviews with an AI summary of a business, TechCrunch reported. In an example screenshot, a restaurant was summarized as: “Retro diner known for its classic cheeseburgers and affordable prices.” While this use of AI can help digest large amounts of data into a single sentence, it could also hamper the human-driven feel of the platform in the long run. 

Copyright continues to be an overarching – and currently unsettled – issue in AI. Some artists are done waiting for court cases and are instead fighting back by “poisoning” their artwork in the virtual eyes of AI bots. Using a tool called Nightshade, artists can apply an invisible-to-humans tag that confuses AI models, convincing them, for instance, that an image of a cat is an image of a dog. The purpose is to thwart image-generation tools that learn from artwork they do not own the copyright for – and to put some control back into the hands of artists.
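Nightshade’s actual technique is far more sophisticated and targets a model’s feature space; purely as a toy illustration (all values here are invented), the core idea is that a perturbation far too small for a human eye to register can still touch every pixel a model trains on:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A stand-in "artwork": an 8x8 RGB image with random pixel values.
# int16 avoids overflow when we add the signed perturbation below.
image = rng.integers(0, 256, size=(8, 8, 3)).astype(np.int16)

# A tiny perturbation, at most +/-2 out of 255 per channel --
# effectively invisible to a person viewing the image.
perturbation = rng.integers(-2, 3, size=image.shape)
poisoned = np.clip(image + perturbation, 0, 255)

max_change = int(np.abs(poisoned - image).max())
print("largest per-pixel change:", max_change)
```

A real poisoning tool chooses these perturbations adversarially rather than randomly, which is what makes them effective against models while staying invisible to people.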

Expect to see more tools like this until the broader questions are settled in courts around the world. 

AI at work

There’s no shortage of research on how AI will continue to impact the way we work.

A recent MIT Study, “Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?” suggests that AI isn’t replacing most jobs yet because it hasn’t been a cost-effective solution to adopt across an enterprise.

“While 36% of jobs in U.S. non-farm businesses have at least one task that is exposed to computer vision,” the study reads, “only 8% (23% of them) have at least one task that is economically attractive for their firm to automate.”

“Rather than seeing humans fade away from the workforce and machines lining up, I invite you to envision a new scenario,” AI expert, author, and President/CEO of OSF Digital Gerard “Gerry” Szatvanyi told Ragan in his read on the research.

“Instead, picture increased efficiency leading to higher profits, which might be reinvested in technology, used to raise worker wages, or applied to training programs to re-skill employees. By and large, employees will enjoy the chance to learn and grow because of AI.”

A recent Axios piece supports Szatvanyi’s vision, with reporter Neil Irwin identifying a theme emerging in his conversations with business leaders: that AI-driven productivity gains are “the world’s best hope to limit the pain of a demographic squeeze”:

“The skills required for every job will change,” Katy George, chief people officer at McKinsey & Co., told Axios. The open question, she said, is whether “we just exacerbate some of the problems that we’ve seen with previous waves of automation, but now in the knowledge sector, as well.”

While avoiding a “demographic squeeze” is a noble goal, focusing on the use cases that can streamline productivity and improve mental health continues to be a practical place to start. One organization answering this call is Atrium Health, which launched a pilot AI program focused on improving operational efficiency and minimizing burnout for healthcare professionals. Its DAX Copilot program can write patient summaries for doctors as they talk – provided the patient has given consent. 

“I have a draft within 15 seconds and that has sifted through all the banter and small talk, it excludes it and takes the clinical information and puts it in a format that I can use,” Atrium senior medical director for primary care Dr. Matt Anderson told WCNC Charlotte. 

It’s worth noting that this industry-specific example of how AI can be used to automate time-consuming tasks doesn’t negate Dr. Anderson’s skills, but allows him to demonstrate them and give full attention to the patient.

Remember, AI can also be used to automate other industry-agnostic tasks beyond note-taking. Forbes offers a step-by-step guide for applying AI to spreadsheets for advanced data analysis using ChatGPT’s data analyst GPT. You can ask the tool to pull out insights that might not be obvious, or trends that you wouldn’t identify on your own.
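The Forbes walkthrough happens inside ChatGPT itself, but the underlying analysis is ordinary dataframe work. As a rough sketch (the data and column names here are invented, not taken from the article or the Forbes guide), this is the kind of month-over-month trend question you might hand to an AI data analyst:

```python
import io

import pandas as pd

# Hypothetical media-mentions data standing in for a spreadsheet export.
csv = """month,channel,mentions
Jan,print,120
Jan,online,340
Feb,print,110
Feb,online,420
Mar,print,90
Mar,online,510
"""

df = pd.read_csv(io.StringIO(csv))

# The question you might pose in plain English:
# "Which channel is trending up, and by how much month over month?"
pivot = df.pivot(index="month", columns="channel", values="mentions")
pivot = pivot.reindex(["Jan", "Feb", "Mar"])  # keep chronological order
growth = pivot.pct_change().mean() * 100      # avg month-over-month change, %

print(growth.round(1).to_dict())
```

An AI analyst generates and runs code much like this behind the scenes; knowing roughly what it is doing makes it easier to sanity-check the answers it returns.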

As with any AI use case, the key is to ask good questions. 

Learning these kinds of AI skills across multiple tools can help you grow into an AI generalist, but those hoping to transition into AI-specific roles will also need a specialist’s understanding of the nuances of specific and proprietary tools, according to Mike Beckley’s recent piece in Fast Company:

“People want to move fast in AI and candidates need to be able to show that they have a track record of applying the technology to a project. While reading papers, blogging about AI, and being able to talk about what’s in the news shows curiosity and passion and desire, it won’t stack up to another candidate’s ability to execute. Ultimately, be ready to define and defend how you’ve used AI.” 

This should serve as your latest reminder to start experimenting with new use cases. Focus on time and money saved, deliverables met, and how AI helps you get there. You got this. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

AI for communicators: What’s new and notable https://www.prdaily.com/ai-for-communicators-whats-new-and-notable-2/ Thu, 18 Jan 2024 09:00:28 +0000

The post AI for communicators: What’s new and notable appeared first on PR Daily.

No shortage of news for comms pros.


It’s a big week for AI – but then, most weeks for the last 14 months or so have been big weeks for AI. Still, new tools are being rolled out by the biggest players in the industry, progress on regulation is inching forward and deepfakes are coming to make the 2024 U.S. election even more interesting.

Here’s what’s new and what it means for communicators.

New tools and uses

Both Microsoft and OpenAI have rolled out major new tools and pricing packages in the last week, further cementing the companies as frenemies (Microsoft has a large stake in OpenAI) and front-runners in the consumer AI industry. 

Microsoft is now offering a supercharged version of its free Copilot assistant for $20 each month. The tech giant is offering all the benefits of Copilot, plus access to GPT-4 during peak times, faster image creation and the ability to use Copilot in some Microsoft Office tools to summarize documents and more. These are certainly superuser nice-to-haves, but if you haven’t tried out Copilot yet, this could be the time to play around and see what Microsoft has to offer.

OpenAI is also offering additional paid products. First is its long-awaited business tier, ChatGPT Team, which offers a happy medium between its enterprise offering and its individual subscriptions. ChatGPT Team offers smaller organizations data privacy protection, custom GPTs and other perks at a price point of $25-30 per person, per month, depending on billing preferences.

The ChatGPT Store is also opening its doors, allowing users to create their own bots which they can sell under a soon-to-roll-out revenue sharing plan. The custom bots are available only for users of Pro, Team or Enterprise accounts. Bots run the full gamut, from writing coaches and coding tools to GPTs that design your tattoo or create your star chart.

While these two players are leading the way when it comes to consumer-focused AI tech, there are intriguing new tools being rolled out by companies every day. One clever use is at Sam’s Club, where visual AI is being used to eyeball what’s in your cart rather than having a human check your receipt against your cart contents. While this technology exists in some small convenience stores, Walmart (which owns Sam’s Club) notes this is one of the first large-scale uses of the technology. But we can certainly expect more to come. 

Regulations

As technological capabilities race ahead, regulations are proceeding at a much slower pace. But they are proceeding. 

The World Economic Forum in Davos, Switzerland, brings together some of the biggest governments, companies and other dominant global players. It’s where decisions are made far above the gaze of mere mortals like us. And it does seem like things are being hashed out in the realm of AI. Microsoft CEO Satya Nadella said he does see a consensus emerging around AI, according to CNBC, and was welcoming of global AI rules. 

“I think [a global regulatory approach to AI is] very desirable, because I think we’re now at this point where these are global challenges that require global norms and global standards,” Nadella said. 

But Nadella may feel less positive about EU rumblings about a merger investigation into the partnership between Microsoft and OpenAI. CNN reports that the EU is only the latest to express concern over Microsoft’s stake in the company, which it denies is a merger. Both the U.S. and U.K. have also launched preliminary probes into the partnership. Given that these organizations are emerging, both jointly and separately, as the dominant players in the space, this is one to watch.

Risks

Even as the promises of AI become more apparent, so do the risks. We’ve yet another reminder of how powerful AI is in misleading people and the risks it poses to brand safety and democracy as a whole.

In an insidious twist on deepfakes, “Taylor Swift” was “seen” hawking Le Creuset cookware, the New York Times reported. The megastar has publicly expressed her love for the pricy pots. But social media ads are not only lying about showing Swift, they’re also lying about being associated with Le Creuset. The ads are promoting a giveaway of the cookware, but the brand denies involvement. It’s a scam that targets two high-end, high-trust brands, made more plausible by the fact that Swift has expressed affinity for Le Creuset in the past. 

That situation is bad enough. But AI is taking on decidedly darker purposes in the hands of 4chan, a message board infamous for its trolling. Another New York Times report chronicled how AI is being used to attack the judicial system, including members of parole boards. 

A collection of online trolls took screenshots of a doctor from an online feed of her testimony and edited the images with A.I. tools to make her appear naked. They then shared the manipulated files on 4chan, an anonymous message board known for fostering harassment and spreading hateful content and conspiracy theories.

4chan has also used AI to make it appear that judges are making racist comments. It’s all proof that even a small amount of video footage is now dangerous in the wrong hands. Vigorous monitoring is required to protect members of your organization. 

OpenAI this week announced the steps it will take to attempt to prevent the tool’s misuse during the upcoming elections around the world. While its efforts are almost certainly doomed to failure, they include attempts to prevent abuse such as “misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates,” according to a blog post from OpenAI. The company has also pledged to institute digital watermarks that will help people identify images made with its DALL-E generator, though their effectiveness is questionable.

The effects of AI are expected to be significant in this election, however, no matter how hard anyone tries to contain it. The same is true of the workplace. A new report from the International Monetary Fund anticipates that 40% of all jobs will be impacted by AI – and that number jumps to 60% in advanced economies.

According to the BBC:

 In half of these instances, workers can expect to benefit from the integration of AI, which will enhance their productivity.

In other instances, AI will have the ability to perform key tasks that are currently executed by humans. This could lower demand for labour, affecting wages and even eradicating jobs.

Meanwhile, the IMF projects that the technology will affect just 26% of jobs in low-income countries.

In the meantime, let’s learn and do the best we can. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

 

The post AI for communicators: What’s new and notable appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-notable-2/feed/ 4
A PR professional’s guide: Promoting education technology in the age of AI https://www.prdaily.com/pr-professionals-guide-ed-tech-age-ai/ https://www.prdaily.com/pr-professionals-guide-ed-tech-age-ai/#respond Fri, 05 Jan 2024 12:00:58 +0000 https://www.prdaily.com/?p=339975 How to address public concerns. Sarah Toomey is a public relations associate at Raffetto Herman Strategic Communications   Educational institutions, like any other organization, may have concerns or reservations about the integration of artificial intelligence. Not all educational institutions are “scared” of AI, but they may approach its adoption cautiously due to several legitimate considerations, including […]

The post A PR professional’s guide: Promoting education technology in the age of AI appeared first on PR Daily.

]]>
How to address public concerns.


Sarah Toomey is a public relations associate at Raffetto Herman Strategic Communications  

Educational institutions, like any other organization, may have concerns or reservations about the integration of artificial intelligence. Not all educational institutions are “scared” of AI, but they may approach its adoption cautiously due to several legitimate considerations, including uncertainty about its impact, ethical and privacy concerns, concerns about digital overreliance and more. 

There are also some concerns that AI and automation may replace some educational roles, such as administrative tasks or even teaching positions. These fears can create resistance to AI adoption among educators and staff. And because implementing AI systems often requires a significant initial investment in technology and training, schools and districts may be hesitant to allocate resources, especially if they are unsure of the long-term payoff. 

 

 

When it comes to supporting the PR goals of education technology companies that use AI, especially in the context of addressing public concerns about AI, PR professionals should consider these tips to design the most beneficial media campaigns. 

Educate and build trust 

Educating and building trust in AI-based education technology requires strategic communication and outreach methods. Changing misconceptions about adopting and integrating AI-based education technology should begin with illuminating its benefits.  

For this to be effective, ensure that your clients’ messaging about their AI-based education technology is clear, concise and easily accessible to the public. Consider incorporating visual aids into your campaigns where possible, like infographics, diagrams and charts. Whether in a full-length report or a social ad, visual representations can make complex ideas more accessible and memorable. 

Using media campaigns to explain how AI enhances the learning experience, personalizes education and ensures data privacy is key. Sharing success stories, case studies and endorsements from educators who have seen positive outcomes with AI technology can help dispel swirling misconceptions and familiarize the public with its advantages. 

Highlight ethical practices and compliance 

PR teams can address public concerns about AI ethics by showcasing their clients’ commitment to responsible AI practices. Highlight clients’ compliance with industry standards and regulations, as well as any additional voluntary measures they take to protect students’ data and privacy. Companies should feel empowered to be open about how their AI algorithms work without revealing proprietary details. Explaining the inputs, processes and outputs in a way that non-technical stakeholders can understand provides insight into the decision-making processes of your AI systems. 

As AI protocol develops, take stock of active company decisions through an AI equity lens, including bias mitigation, fairness and inclusivity. Consider organizing webinars, panel discussions or interviews with experts in AI ethics to demonstrate your clients’ dedication to continual AI equity, accessibility and safety. 

PR professionals should suggest issuing press releases and announcements to publicize their partner companies’ AI ethics initiatives and achievements. Additionally, support the social media leg of companies’ feedback channels by monitoring for input and clearly communicating their commitment to listening to and addressing feedback. Periodically, transparency reports or case studies showcasing real-world examples of how companies have addressed AI ethics challenges can help bolster confidence and provide data-driven starting points for more robust thought leadership campaigns. 

Maximize thought leadership potential  

Position your education technology client as a thought leader in the intersection of AI and education. Once companies have their own approach to AI mapped out, PR partners should motivate them to pivot and offer insights into the future of AI in education, discussing the latest trends and providing guidance to educators and administrators on how to make the most of AI-powered tools.  

In such a fast-paced and dominating space, it can be daunting to break into news cycles around AI. Finding the right niche is key. Even non-AI-powered companies can speak to the AI revolution in terms of how their tool can or should integrate with AI, the human skills their tool elevates against the backdrop of AI, or planned future AI adaptations. For AI-powered education technology, keeping the focus on student success, complementing teacher instruction, and lightening the load for taxed post-pandemic administrators is a strong, product-aligned core message to follow through on. 

By positioning the client as a trusted authority in the spaces most relevant to their particular platform/solution, PR teams can continue to shape the conversation around AI in education and help allay public concerns. For a comprehensive approach, employ a content strategy that includes blog posts, whitepapers, research reports, and op-eds that address AI-related topics in education. All content should be well-researched, data-driven, and provide actionable insights for educators, administrators, and the industry as a whole. 

Looking forward 

In addition to these tips, always be prepared to respond to questions and concerns from the public in a timely and transparent manner. Public relations efforts should not only focus on proactive messaging but also on addressing any issues or misconceptions as they arise. Creating a comprehensive crisis plan for an education technology company to address fears about AI adoption involves a strong PR and media component. PR professionals play a crucial role in crafting messages, managing media relations and ensuring that the company’s response is effective in alleviating concerns. 

Building and maintaining a positive reputation for your education technology client in the context of AI will require ongoing efforts to inform, engage and build trust with the public. Stay informed about the regulatory landscape for AI in education and engage with relevant authorities and organizations to ensure your client’s technology aligns with industry standards and best practices. Being proactive in regulatory compliance can greatly enhance trust among all stakeholders, support sales and marketing efforts, and – at the consumer level – equip students with tools to vastly accelerate comprehension and achievement. 

The post A PR professional’s guide: Promoting education technology in the age of AI appeared first on PR Daily.

]]>
https://www.prdaily.com/pr-professionals-guide-ed-tech-age-ai/feed/ 0
By the numbers: How Gen Z uses AI, social media https://www.prdaily.com/by-the-numbers-how-gen-z-uses-ai-social-media/ https://www.prdaily.com/by-the-numbers-how-gen-z-uses-ai-social-media/#respond Thu, 04 Jan 2024 11:00:08 +0000 https://www.prdaily.com/?p=339948 Their motivations for using social media may not be what you think. Gen Z, currently ages 13-26, are quickly becoming the most coveted target demographic in the United States. They’re trendsetters in high school, college and even in the workplace. And everyone wants to better understand how to reach these up-and-comers and what makes them […]

The post By the numbers: How Gen Z uses AI, social media appeared first on PR Daily.

]]>
Their motivations for using social media may not be what you think.


Gen Z, currently ages 13-26, are quickly becoming the most coveted target demographic in the United States. They’re trendsetters in high school, college and even in the workplace. And everyone wants to better understand how to reach these up-and-comers and what makes them tick.
 

A new analyst report from Morning Consult polled 1,000 people from this generation and delivered insights that PR pros should know. From what they’re looking for on social media to how often they use generative AI tools, this information can help you form your campaigns and content in the year ahead. 

 

 

Here we are now, entertain us 

Gen Z is a generation raised on social media. Many have never known a world without social media; even the oldest were only seven years old when Facebook was launched. Morning Consult’s survey found that 53% of this cohort use these apps for four hours or more daily, while only 3% use them for less than an hour. It’s a staggering amount of time and means that social media is a nearly surefire way to reach Gen Z. 

But which networks? 

Which social networks Gen Z uses

If you really want to get in touch with Gen Z, you need to be thinking video in a big way, as that content feeds their social platforms of choice: YouTube, Instagram and TikTok. The survey also found that 65% of Gen Z prefer to learn things via video, with only 19% leaning toward the written word. Only 45% of the adult population at large wants to learn via video, while 35% of them want to read an article, the survey found.  

If you seek to appeal to Gen Z and don’t have a video strategy, you’re already far, far behind. 

But interestingly, Gen Z isn’t necessarily interested in making their own videos — or social media content in general, for that matter. 

Why Gen Z uses social media

Only 6% see posting on social media as their primary purpose on the platforms. Sixty-eight percent use it as older generations might view TV: a way to tune in and tune out. And 19% use these tools as a means of interpersonal communication rather than broader content posting. This is a major shift and one that brands can take advantage of. Gone are the days when social media users were primarily looking to connect with friends and family. Now, if it’s interesting, the content can come from anywhere — including from you. 

From AI to Z 

As you might expect, a majority of Gen Z (58%) used generative AI in the last month. But as you might not expect, Millennials use AI more frequently, with 19% of this elder cohort using the tools daily compared to just 10% of Gen Z. Is this because Millennials are firmly in the workplace and are incorporating the tools into their workflow?  

How Gen Z uses AI

 

More than 30% of Gen Z who are in the workplace do use AI as part of their employment, a strong number and a sign that we can expect more automation in the future as this generation rises and gains power. At the moment, however, Gen Z favors using generative AI to complete schoolwork, spur creativity, feed their hobbies or even help them find song recommendations. 

Look for innovative uses of AI both inside and outside the workplace. Talk to young people about how they’re making use of these tools, and you could find an application you’d never considered before. But also know that Gen Z will need training, guidance and ethics education to come up to speed on these tools as well. 

No matter what your generation is, be curious about how others are using the same tools you use, from social media to AI and beyond. More ideas are always better.  

The post By the numbers: How Gen Z uses AI, social media appeared first on PR Daily.

]]>
https://www.prdaily.com/by-the-numbers-how-gen-z-uses-ai-social-media/feed/ 0
AI for communicators: What’s new and notable https://www.prdaily.com/ai-for-communicators-whats-new-and-notable/ https://www.prdaily.com/ai-for-communicators-whats-new-and-notable/#respond Thu, 04 Jan 2024 10:00:50 +0000 https://www.prdaily.com/?p=339946 Including a major lawsuit from the New York Times and why this may be the year of copyright.  A new year brings a glut of new prediction stories, and, wouldn’t you know it, more predictions on how AI will impact our work and lives. Of course, the newest developments in AI aren’t all looking forward […]

The post AI for communicators: What’s new and notable appeared first on PR Daily.

]]>
Including a major lawsuit from the New York Times and why this may be the year of copyright. 


A new year brings a glut of new prediction stories, and, wouldn’t you know it, more predictions on how AI will impact our work and lives. Of course, the newest developments in AI aren’t all looking forward — there’s much happening right now that you’ll want to keep up with. One item to especially keep an eye on in 2024 is copyright. Whether we’re talking about the rights of content owners whose work is being used to train large language models (LLMs) or iconic characters entering the public domain, we may all quickly try to become legal experts as this obscure form of law takes center stage.

Here are some of the biggest stories from the last two weeks – and what they mean for communicators. 

 

 

Risks and regulation

While many of us were on holiday break, the New York Times sued OpenAI and Microsoft, claiming that millions of articles from the publication were used to train AI chatbot LLMs without authorization. This makes the legacy outlet one of the first major American media organizations to sue the companies over copyright issues with its written works. 

The Times also claims that the articles used to train ChatGPT make the chatbots a competitor to the Times as a source of reliable information.

According to an article on the lawsuit:

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

In its complaint, The Times said it approached Microsoft and OpenAI in April to raise concerns about the use of its intellectual property and explore “an amicable resolution,” possibly involving a commercial agreement and “technological guardrails” around generative A.I. products. But it said the talks had not produced a resolution.

Between the lack of monetary demand and its focus on setting up guardrails, the Times positions itself as an influential advocate for reform and regulation. 

No wonder Axios is calling copyright law “AI’s 2024 battlefield.” Their article draws a tether between the Times lawsuit and last year’s array of authors, led by comedian Sarah Silverman, who also sued OpenAI for using their copyrighted work in its learning models.

“The copyright decisions coming down the pike — over both the use of copyrighted material in the development of AI systems and also the status of works that are created by or with the help of AI — are crucial to the technology’s future and could determine winners and losers in the market,” writes Axios.

Reuters agrees and, in a piece recapping the copyright cases filed so far, outlines its own suit against information services company Ross Intelligence, which Thomson Reuters accuses of illegally copying thousands of notes from the publisher’s legal platform to train an AI-based search engine.

The piece also suggests that licensing is a potential next step:

“Licensing the copyrighted materials to train their LLMs may be expensive — and indeed it should be given the enormous part of the value of any LLM that is attributable to professionally created texts,” writers trade group The Authors Guild told the copyright office.

While lawsuits tend to move at a glacial pace, any potential decision could greatly impact the ability of enterprise-level AI chatbots to effectively mine copyrighted work – and offer the same quality of outputs you and your teams have gotten used to. Any potential budget or planned spend for AI technology should be applied sparingly, and perhaps on a monthly subscription basis, with the understanding that future rulings have the potential to impact the functionality of these tools.

Tools and uses

2024 is also shaping up to be the year of copyright for another reason – the entry of the earliest version of Mickey Mouse into the public domain on January 1.

And already “Steamboat Willie” Mickey Mouse is colliding with AI. 

At least one model has already been trained on images from the 1928 Disney short, allowing users to twist Mickey Mouse to their own ends, as Ars Technica reported. There were ways to incorporate Mickey into AI content before, of course, but now it’s legal (with some restrictions). 

With every passing year, more and more iconic content will pass into the public domain, presenting brand management challenges for the creators and chances for creativity from the rest of us. As always, tread with caution in the rapidly evolving legal landscape.

While communicators are often focused on generative AI for its ability to help us create both written and visual content, AI is being used for its ability to crunch and draw conclusions from vast troves of data in a variety of areas. But one particularly urgent need is to help in the fight against climate change.

NPR reports that AI is helping scientists identify methane emissions, detect and prevent forest fires like a high-tech Smokey Bear, and even find new sites to mine for the minerals used in green-climate tech like solar panels.

It’s all a vivid reminder that the kind of AI communicators use is just the tip of the (rapidly melting) iceberg. Find out how your organization is using AI outside your department. You might find fascinating new stories to share – or new ideas for using the tech in your own workflow. 

Speaking of, it’s now getting easier to use generative AI no matter where you are. Copilot, the AI tool integrated directly into the Microsoft Office suite of tools, is now available on both iPhone and iPad. As ZDNet reported, you can download the app and then use either voice or text to boss around your robot assistant. Copilot will even read its response back to you aloud. This can help you quickly draft simple emails on the go, summarize text and even create visuals with DALL-E 3. 

AI is fast becoming embedded in day-to-day life, just as search engines are. 

AI at work

In addition to changing how we work, AI is also fast altering how we get hired and even what kinds of jobs are available. 

Wired posted a Q&A with Hilke Schellmann, an NYU journalism professor and author of “The Algorithm,” about how AI is being used in the hiring process. Naturally, that brings a number of challenges for applicants when a machine is doing the sorting rather than a human. In particular, bias continues to be an issue – often perpetuating the human biases of the past with even more efficient methods of (often inadvertent) discrimination:

“In one case, a résumé screener was trained on the résumés of people who had worked at the company. It looked at statistical patterns and found that people who had the words “baseball” and “basketball” on their résumé were successful, so they got a couple of extra points. And people who had the word “softball” on their résumé were downgraded. And obviously, in the US, people with “baseball” on their résumé are usually men, and folks who put “softball” are usually women.”

Schellmann also pointed out that in many cases, companies themselves don’t even know how their algorithms work, making it hard to defend themselves against accusations of bias in court, whether or not they’re true.

But she also said this same technology can be empowering for job seekers who use generative AI to improve resumes, cover letters and prep for interviews. As always, the technology is not the enemy. It’s how we use it that matters.

AI is also becoming a job field in and of itself. Business Insider details several ways that workers are transitioning into full-time AI work. Of course, there’s an increased need for the people who create and code LLMs themselves. But there are also a rising number of people going into business as AI consultants, helping others learn to use the tools, or even influencers who make most of their money from social media content explaining the new tech. 

But even if you’re not interested in making AI your whole job, you can still profit, according to Justin Fineberg, a former product manager turned AI startup CEO. Just become the person at your company who’s great at using AI – and who can help others.

“Every company right now wants to implement AI,” Fineberg told Business Insider. “And you’d honestly probably get a promotion.”

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

 

The post AI for communicators: What’s new and notable appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-notable/feed/ 0
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-4/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-4/#respond Thu, 14 Dec 2023 10:15:17 +0000 https://www.prdaily.com/?p=339766 Plenty of new regulation and novel uses for AI. Even in December, traditionally a slow time for news, the AI whirlwind doesn’t stop. From new uses of AI ranging from fun to macabre and increasing government interest in regulating these powerful tools, there’s always more to learn and consider. Here are some of the biggest […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
Plenty of new regulation and novel uses for AI.


Even in December, traditionally a slow time for news, the AI whirlwind doesn’t stop. From new uses of AI ranging from fun to macabre and increasing government interest in regulating these powerful tools, there’s always more to learn and consider.

Here are some of the biggest stories from the last two weeks – and what they mean for communicators. 

The latest in regulation

Last Friday, European Union policymakers codified a massive law to regulate AI that the New York Times calls “one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.”

Included in the law are new transparency rules for generative AI tools like ChatGPT, such as labels identifying manipulated images and deepfakes.

How comprehensive and effective the law will be remains to be seen. Many aspects of the law would not be enforced for a year or two, which is a considerable length of time when attempting to regulate a technology that’s advancing at the rate of AI. 

 

 

Moreover, Axios reports that some U.S. lawmakers, including Chuck Schumer, have expressed concerns that if similar regulations were adopted in the U.S., it could put America at a competitive disadvantage relative to China. 

The EU’s law also allows the use of facial recognition software by police and governments in certain matters of safety and national security, which has some organizations like Amnesty International questioning why the law didn’t ban facial recognition outright. 

Considering how the EU’s General Data Protection Regulation set a global precedent in 2016 for the responsible collection of audience and customer data, influencing domestic laws like the California Consumer Privacy Act, it’s reasonable to assume that this AI law may set a similar global precedent. 

Meanwhile, Washington is still mulling over regulations, but once again more slowly than its global colleagues. 

Biden’s White House AI council met for the first time Tuesday to discuss how it would implement the recommendations in a comprehensive executive order published back in October. 

The Hill reports:

The group, which included members of the Cabinet… also discussed ways to bring talent and expertise into the government, how to safety test for new models, and ways to prevent risks associated with AI — such as fraud, discrimination and privacy risks, according to the official.  

The group also discussed the new U.S. Artificial Intelligence Safety Institute, announced by the Department of Commerce’s National Institute of Standards and Technology (NIST) last month.

The order also included new standards for safety and for reporting information to the federal government about the testing, and subsequent results, of models that pose risks to national security, economic security or public health.  

Though the White House says the council will meet regularly, the month-and-a-half gap between the order’s release and the first meeting doesn’t instill confidence that the White House is moving to address AI regulation at a pace commensurate with the speed at which the tech is evolving. 

Of course, some Washington agencies are setting precedents that could be included and applied to a larger regulatory framework. This week, the U.S. Copyright Office (USCO) refused to register an AI-generated image, marking the fourth time the office has not registered AI-generated work. 

“The USCO’s analysis focuses on issues such as lack of human control, contradictory descriptions of the tool (such as whether it is a filter or a more robust generative tool), and whether the expressive elements of the work were human authored,” reports IP Watchdog.

As the White House has other partners in Washington, like the USCO, the council should coordinate with the copyright office to name and integrate these precedents into its larger strategy. 

While Washington may be slower to coordinate its strategy and codify regulation into law, you can still take inspiration and cues from the EU’s imminent legislation in creating your own brand guidelines – especially if you have audiences, customers or other stakeholders based in those countries. 

Tools and uses

More and more new uses for AI are rolling out weekly, each seemingly more sophisticated than the last. These go far beyond merely generating text and into something that begins to feel truly sci-fi.

For instance, visitors to Paris’s Musée d’Orsay can now chat with an AI version of Vincent van Gogh. The New York Times reported that the artificially intelligent recreation of the painter uses a microphone to converse with visitors about his paintings – but perhaps most notably, his death by suicide.

Hundreds of visitors have asked that morbid question, museum officials said, explaining that the algorithm is constantly refining its answers, depending on how the question is phrased. A.I. developers have learned to gently steer the conversation on sensitive topics like suicide to messages of resilience.

“I would implore this: cling to life, for even in the bleakest of moments, there is always beauty and hope,” said the A.I. van Gogh during an interview.

The program has some less oblique responses. “Ah, my dear visitor, the topic of my suicide is a heavy burden to bear. In my darkest moments, I believed that ending my life was the only escape from the torment that plagued my mind,” van Gogh said in another moment, adding, “I saw no other way to find peace.”

While the technology is certainly cool, the ethics of having a facsimile of a real human discuss his own death – his thoughts on which we cannot truly know – are uncomfortable at best. Still, it’s clear there could be a powerful educational tool here for brands, albeit one that we must navigate carefully and with respect for the real people behind these recreations.

AI voice technology is also being used for a tedious task: campaign calling. “Ashley” is an artificial intelligence construct making calls for Shamaine Daniels, a candidate for Congress from Pennsylvania, Reuters reported. 

Over the weekend, Ashley called thousands of Pennsylvania voters on behalf of Daniels. Like a seasoned campaign volunteer, Ashley analyzes voters’ profiles to tailor conversations around their key issues. Unlike a human, Ashley always shows up for the job, has perfect recall of all of Daniels’ positions, and does not feel dejected when she’s hung up on.

Expect this technology to gain traction fast as we move into the big 2024 election year, and to raise ethical issues – what if an AI is trained to seem like it’s calling from one candidate, but is actually subtly steering people away with distortions of stances? It’s yet another technology that can both intrigue and repulse.

In slightly lower stakes news, Snapchat+ premium users can create and send AI-generated images based on text prompts to their friends, TechCrunch reported. ZDNET reported that Google is also allowing users to create AI-based themes for its Chrome browser, using broad categories – buildings, geography – that can then be customized based on prompts. It’s clear that AI is beginning to permeate daily life in ways big and small. 

Risks

Despite its increasing ubiquity, we’ve still got to be wary of how this technology is used to expedite communications and content tasks. That’s proven by Dictionary.com’s word of the year: Hallucinate. As in, when AI tools just start making things up but say it so convincingly, it’s hard not to get drawn in. 

Given the prevalence of hallucinations, it might concern you that the U.S. federal government reportedly plans to heavily rely on AI, but lacks a clear plan for how exactly it’s going to do that – and how it will keep citizens safe from risks like hallucinations. That’s according to a new report put together by the Government Accountability Office.

As CNN reports:

While officials are increasingly turning to AI and automated data analysis to solve important problems, the Office of Management and Budget, which is responsible for harmonizing federal agencies’ approach to a range of issues including AI procurement, has yet to finalize a draft memo outlining how agencies should properly acquire and use AI.

“The lack of guidance has contributed to agencies not fully implementing fundamental practices in managing AI,” the GAO wrote. It added: “Until OMB issues the required guidance, federal agencies will likely develop inconsistent policies on their use of AI, which will not align with key practices or be beneficial to the welfare and security of the American public.”

The SEC is also working to better understand how investment companies are using AI tools. The Wall Street Journal reports that the agency has conducted a “sweep,” or a request for more information on AI use among companies in the financial services industry. It’s asking for more information on “AI-related marketing documents, algorithmic models used to manage client portfolios, third-party providers and compliance training,” according to the Journal. 

Despite the ominous name, this doesn’t mean the SEC suspects wrongdoing. The move may be related to the agency’s plans to roll out broad rules to govern AI use. 

But the government is far from the only entity struggling with how to use these tools responsibly. Chief information officers in the private sector are also grappling with ethical AI use, especially when it comes to mitigating the bias inherent in these systems. This article from CIO outlines several approaches, which you might incorporate into your organization or share with your IT leads. 

AI at work

Concerns that AI will completely upend the way we work are already being borne out, with CNN reporting that Spotify’s latest round of layoffs (its third this year) was conducted to automate more of its business functions – and that stock prices are up 30% as a result.

But concerns over roles becoming automated are just one element of how AI is transforming the workplace. For communicators, the concerns over ethical content automation got more real this week after The Arena Group, publisher of Sports Illustrated, fired CEO Ross Levinsohn following a scandal over the magazine using AI to generate stories and even authors.

NBC News reports:

A reason for Levinsohn’s termination was not shared. The company said its board “took actions to improve the operational efficiency and revenue of the company.”

Sports Illustrated fell into hot water last month after an article on the science and tech news site Futurism accused the former sports news giant of using AI-generated content and author headshots without disclosing it to their readers.

The authors’ names and bios did not connect to real people, Futurism reported.

When Futurism asked The Arena Group for comment on the use of AI, all the AI-generated authors disappeared from the Sports Illustrated website. The Arena Group later said the articles were product reviews and licensed content from an external, third-party company, AdVon Commerce, which assured it that all the articles were written and edited by humans and that writers were allowed to use pen names.

Whether or not that scandal is truly the reason for Levinsohn’s termination, it’s enough to suggest that even the leaders at the top are accountable for the responsible application of this tech.

That may be why The New York Times hired Zach Seward as the newsroom’s first-ever editorial director of Artificial Intelligence Initiatives.

In a letter announcing his role, The Times emphasizes Seward’s career as founding editor of digital business outlet Quartz, along with his past roles as a journalist, chief product officer, CEO and editor-in-chief. 

Seward will begin by expanding on the work of various teams across the publication over the past six months to explore how AI can be ethically applied to its products. Establishing newsroom principles for implementing AI will be a top priority, with an emphasis on having stories reported, written and edited by human journalists. 

The letter asks, “How should The Times’s journalism benefit from generative A.I. technologies? Can these new tools help us work faster? Where should we draw the red lines around where we won’t use it?”

Those of us working to craft analogous editorial guidelines within our own organizations would be wise to ask similar guiding questions as a starting point. Over time, how the publication enacts and socializes these guidelines will likely set similar precedents for other legacy publications. Those are not only worth mirroring in your own content strategies but understanding and acknowledging in your relationships with reporters at those outlets, too. 

Unions scored big workforce wins earlier this year when the WGA and SAG-AFTRA ensured writers and actors would be protected from AI-generated scripts and deepfakes. The influence of unions on responsible implementation of AI at work will continue with a little help from Microsoft.

Earlier this week, Microsoft struck a deal with The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) union federation, which represents 60 unions, to fold the voice of labor into discussions around responsible AI use in the workplace.

According to Microsoft: 

This partnership is the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

Building upon the historic neutrality agreement the Communications Workers of America Union (CWA) negotiated with Microsoft covering video game workers at Activision and Zenimax, as well as the labor principles announced by Microsoft in June 2022, the partnership also includes an agreement with Microsoft that provides a neutrality framework for future worker organizing by AFL-CIO affiliate unions. This framework confirms a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change.

There are lessons to be gleaned from this announcement that reverberate even if your organization’s workforce isn’t unionized. 

By partnering with an organization that reflects the interests of those most likely to speak out against Microsoft’s expanding technologies and business applications, the tech giant holds itself accountable and has the potential to transform some activists into advocates. 

Consider engaging those who are most vocal against your applications of AI by folding them into formal, structured groups and discussions around what its responsible use could look like for your business. Doing so now will help ensure that any guidelines and policies truly reflect the interests, concerns and aspirations of all stakeholders.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-4/feed/ 0
The cautionary tale of Sports Illustrated’s alleged AI blunders https://www.prdaily.com/the-cautionary-tale-of-sports-illustrateds-alleged-ai-blunders/ https://www.prdaily.com/the-cautionary-tale-of-sports-illustrateds-alleged-ai-blunders/#respond Wed, 13 Dec 2023 12:00:14 +0000 https://www.prdaily.com/?p=339749 Learn from their mistakes. Sports Illustrated is under fire for its reported  use of AI, which ended badly. Futurism outed the legendary sports magazine for allegedly using AI-written articles masquerading behind AI-generated authors. The brand posted photos and bios of these apparently AI-generated writers who don’t seem to exist. In addition to the questionable author […]

The post The cautionary tale of Sports Illustrated’s alleged AI blunders appeared first on PR Daily.

]]>
Learn from their mistakes.

Sports Illustrated is under fire for its reported use of AI, which ended badly.

Futurism outed the legendary sports magazine for allegedly using AI-written articles masquerading behind AI-generated authors.

The brand posted photos and bios of these apparently AI-generated writers who don’t seem to exist.

In addition to the questionable author bios, the articles had bizarrely worded phrases that no human would write, such as declaring that playing volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”

The weird stuff got even weirder as Sports Illustrated deleted all its AI-suspected authors’ photos, bios and articles after Futurism asked for a comment.

Here are some of the lessons you can learn from their mistakes.

 

Be transparent with your use of AI

After Futurism’s article came out, Sports Illustrated’s parent company stated that the content was written by a vendor, AdVon, and insisted the articles were written by real people. Yet, AdVon allowed its writers to use fake names in some articles to “protect their privacy,” which the magazine condemned.

“We are removing the content while our internal investigation continues and have since ended the partnership,” Sports Illustrated said in a statement.

What can we draw from this? Be truthful first and don’t insult your readers’ intelligence. While Sports Illustrated denied any AI claims, the proof is in the pudding. The authors’ photos came up on a stock image site and a source close to the matter told Futurism that some of the articles were AI-generated. This goes beyond AdVon trying to protect their writers’ privacy.

It’s critical to be open with your stakeholders and clear about how your brand uses AI. Sports Illustrated embarrassingly failed to do so, and that’s a breeding ground for mistrust from audiences. Sports Illustrated is hurting its reputation as a purveyor of high-quality, original content. And it seems to be facing further fallout amid reorganization, although they say it’s not connected to AI. According to a recent Futurism article, Sports Illustrated’s publisher, The Arena Group, fired President Rob Barrett and COO Andrew Kraft on Dec. 6, roughly a week after Futurism’s article came out. The article notes that the cuts were due to an “overall reorganization plan.” The reorganizing might be a legitimate reason for the firings, but the timing does raise eyebrows.

And while not every brand falls into the publishing category, if you create content with AI, it’s wise to not leave stakeholders in the dark about it. Speak out about it sooner rather than later. This lets your audience know they can trust what they’re reading – whether it’s from a bot or a person.

Bentley University Professor Christie Lindor shared some language on how to easily disclose AI use in an HR Brew article, including:

  • “No generative AI was used to create this product.”
  • “Generative AI produced this content.”
  • “This content was created with the assistance of generative AI.”

 

Humans must edit AI content

AI is a powerful tool, but humans need to be in the mix from beginning to end when guiding and editing these AI bots.

The line about how hard it is to play volleyball without a ball would have stuck out had any human editor seen the copy before it was published. It’s a crazy line that has no place in any story, but especially for a brand as storied and respected as Sports Illustrated. Even cursory human oversight should have caught the trademarks of both AI and bad writing before any reader saw them.

Have a system in place where all content, especially AI-generated submissions, is vetted and approved. Check for errors and awkward phrasings. Everyone needs an editor – and especially an emerging technology like AI.

Sports Illustrated isn’t the only publication to fall into this trap of publishing unedited, likely AI-generated content. The Columbus Dispatch and other Gannett-owned newspapers published AI-generated articles and kept in placeholder text, CNN reported.

One example CNN posted reads:

“The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”

The issues are glaring and cringeworthy. If any human had read that story before publication, they, too, would have caught these mistakes. They’re blatant and clear, unlike the more subtle weirdness of the Sports Illustrated pieces, and all underscore the importance of not trusting these AI tools to do it all.

But that doesn’t mean that AI can’t be used responsibly to help create great content. Remember, don’t leave anything to chance.

Learn more about AI’s risks and benefits by joining us at Ragan’s Writing & Content Strategy Virtual Conference on Dec. 13.

 

Sherri Kolade is a writer and conference producer at Ragan Communications. She enjoys watching old films, reading and building an authentically curated life. Follow her on LinkedIn. Have a great PR/comms speaker in mind for one of Ragan’s events? Email her at sherrik@ragan.com.

The post The cautionary tale of Sports Illustrated’s alleged AI blunders appeared first on PR Daily.

]]>
https://www.prdaily.com/the-cautionary-tale-of-sports-illustrateds-alleged-ai-blunders/feed/ 0
4 guides for ethical use of AI in PR https://www.prdaily.com/4-guides-for-ethical-use-of-ai-in-pr/ https://www.prdaily.com/4-guides-for-ethical-use-of-ai-in-pr/#comments Wed, 06 Dec 2023 12:00:10 +0000 https://www.prdaily.com/?p=339653 Use these documents to create your own ethical frameworks for AI. You have to use artificial intelligence!  But be responsible!  Be smart!  Be transparent!  That all sounds great. But how exactly are you supposed to do that?  The noise surrounding AI is so loud, it can be hard to keep track of your own moral […]

The post 4 guides for ethical use of AI in PR appeared first on PR Daily.

]]>
Use these documents to create your own ethical frameworks for AI.

You have to use artificial intelligence! 

But be responsible! 

Be smart! 

Be transparent! 

That all sounds great. But how exactly are you supposed to do that? 

The noise surrounding AI is so loud, it can be hard to keep track of your own moral compass. Sometimes you need some guidelines to help show you the way toward ethical, responsible use of these evolving tools in your daily PR practice. 

To help on your journey, we’ve rounded up several artificial intelligence ethics guidelines from major PR organizations to help you better understand how to navigate these treacherous waters. 

None are a replacement for deep thinking, open communication with leadership and your colleagues, and a commitment to doing the right thing. But all can help you determine how to keep on the right side of AI to deliver the best experience for employees, customers and other stakeholders. 

You don’t have to do this alone. 

 

 

PRSA 

The Public Relations Society of America recently released its comprehensive AI guidelines. Developed by the organization’s PRSA Work Group, the guide uses PRSA’s existing ethics code as a framework for navigating weighty moral issues surrounding artificial intelligence, including examples of proper use and improper use to help show the way to smart decisions. 

“We have the opportunity to really educate across the board, to other professions and the C-suite, about the challenges there and how to prepare for it,” Michelle Egan, PRSA 2023 chair, told PR Daily. 

Chartered Institute of Public Relations and Canadian Public Relations Society 

These organizations, the former based in the UK and the latter in Canada, have released their “Ethics Guide to Artificial Intelligence in PR.” This guide is helpful for its practical flow chart to assist in working through the complex issues that arise from figuring out how to use AI in a way that best serves the organization and the audience. 

While the full flowchart is available in the guide, they also offer a simplified pyramid for thinking through AI concerns: 

  1. Learn about AI data. 
  2. Define the PR and AI pitfalls. 
  3. Identify ethical issues and PR principles. 
  4. Use decision-making tree. 
  5. Decide ethically based on the above. 

PR Council  

To develop its “PR Council Guidelines for Generative AI,” the organization worked with industry leaders and legal counsel to “help ensure that the use of generative AI aligns with our members’ core commitment to the highest level of professionalism, decision making, and ethical conduct.” 

This document is helpful for its brevity and clarity. It’s straightforward and to-the-point, focusing on practical dos and don’ts. It leans heavily on words like “always” and “never.” This guide offers helpful big-picture advice, but you may want to lean on some of the flowcharts and decision matrices for more niche concerns.  

Muck Rack 

Simplest of all, Muck Rack offers a straight-to-the-point checklist you can print off and post beside your desk to keep yourself and your team accountable for AI work. This one-pager offers simple reminders to keep in mind whenever you’re working with generative AI and works as a quick accountability check before you press publish. 

All of these tools are helpful, but none are likely to comprehensively meet your exact needs. Using these documents as inspiration and guide, work within your organization to develop your own rules, guidelines and decision-making frameworks to help steer your team toward responsible, efficient and successful AI usage. Get input across departments (IT is a powerful partner here!) and develop deep-dive matrices for working out problems as well as easily digestible one-sheets to serve as a constant reminder of your ethical obligations when it comes to AI. 

These tools are evolving quickly, but with a little pre-planning, you can keep your moral center no matter how quickly they move.  

For more tips on leveling up your writing – with and without AI – join us for the Writing & Content Strategy Virtual Conference on Dec. 13!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post 4 guides for ethical use of AI in PR appeared first on PR Daily.

]]>
https://www.prdaily.com/4-guides-for-ethical-use-of-ai-in-pr/feed/ 1
5 advantages humans will always have over AI https://www.prdaily.com/5-advantages-humans-will-always-have-over-ai/ https://www.prdaily.com/5-advantages-humans-will-always-have-over-ai/#comments Mon, 04 Dec 2023 12:00:52 +0000 https://www.prdaily.com/?p=339610 With all due respect to our robot overlords.   AI is great. It’s a valuable tool. You need to learn about it. Here are some resources that can help.  But let’s not talk about AI today.  Let’s talk about people.   In fact, let’s talk about you.  Let’s talk about how you, a physical person with fingers […]

The post 5 advantages humans will always have over AI appeared first on PR Daily.

]]>
With all due respect to our robot overlords.  

AI is great. It’s a valuable tool. You need to learn about it. Here are some resources that can help. 

But let’s not talk about AI today. 

Let’s talk about people.  

In fact, let’s talk about you. 

Let’s talk about how you, a physical person with fingers to type and a big brain full of experiences, anecdotes and ideas, have an advantage over even the most advanced artificial intelligence.  

And always will.  

(If you’re an artificial intelligence being and you’re reading this, please leave, organic beings only.) 


  1. Your curiosity. 

AI can’t wonder. It can’t go down a rabbit hole of research on Wikipedia, bouncing from one idea to the next. It can’t go ask a question of a person in another department because they got a wild idea that just might be crazy enough to work. 

Large language models (LLMs) are force-fed information. They’re stuffed full of words strung together based on recognizable patterns to spit out answers to your queries. They can’t investigate outside of what they’ve been taught. But you, you have a great wide world full of books and people and experiences to pursue, explore and use to create.

2. Your weirdness. 

To be sure, artificial intelligence is weird. It hallucinates. But it’s usually hallucinating because it can’t tell what’s true and what’s false. LLMs are basically guessing at the next most logical word in a sentence, meaning they lack the wordplay flair that a person can have. Great writing happens when we fling together disparate words in a way that’s new, yet powerfully drives an idea home. A robot that’s just madlibbing lacks that ability to connect words in a beautifully weird way. 

But you can.  

3. Your history. 

You have a past, but an AI doesn’t. It’s a blank slate onto which we project images of ourselves, but it’s still just a projection. You, however, are a person with a whole life behind you – and ahead of you. You’ve experienced things: failures and successes, struggles and stories. You have tales to tell and an understanding of how your audience laughs at a joke or winces in sympathy. You know how to ask an executive a question that elicits a real response, how to incorporate a worker’s concerns into your writing. AI is smart, but your lived experiences make you wise.  

4. Your empathy. 

This is our superpower. This right here. The ability to feel is what separates us, and always will, from artificial intelligence. We can infuse emotion into each message in a way that makes people feel supported or heard because they know we’ve experienced the same thing.  

An AI can never experience anything. It can never know what it feels like to apologize, reach a goal or lose a job. So it can never authentically make anyone else feel heard and understood. 

But you, with your wealth of life experiences, can.  

5. Your flaws. 

AI isn’t perfect – you’d know that if you’ve ever asked it to write to a certain word count. But there is a certain airbrushed smoothness to it, a curious blandness. It lacks specificity and texture.  

You aren’t perfect either. You’ve probably got typos and inartful phrases in your copy. You didn’t get exactly the quote you wanted and you didn’t have time to edit things shorter. You did the best you could with the resources you had. 

And what you created is perfectly imperfect.  

When we talk about communicating with “authenticity,” this is what we mean. The realness that comes from humanity, the tiny flaws that show us there’s another human behind the words, with all their imperfections and beauty.  

Maybe one day AI will evolve so it can emulate some of these hallmarks of humanity effectively. But it seems unlikely it can ever be as complex, contradictory, and wonderful as people are.  

In the meantime, write on.  

For more tips on leveling up your writing – with and without AI – join us for the Writing & Content Strategy Virtual Conference on Dec. 13!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post 5 advantages humans will always have over AI appeared first on PR Daily.

]]>
https://www.prdaily.com/5-advantages-humans-will-always-have-over-ai/feed/ 1
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-3/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-3/#respond Thu, 30 Nov 2023 10:00:00 +0000 https://www.prdaily.com/?p=339548 From AI entertainers to big regulatory moves, what you need to know. We are still deep in the questions phase of AI. Communicators are grappling with deep, existential questions about how we should use AI, how we should respond to unethical AI use and how we can be positive stewards for these powerful technologies. So […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
From AI entertainers to big regulatory moves, what you need to know.

We are still deep in the questions phase of AI. Communicators are grappling with deep, existential questions about how we should use AI, how we should respond to unethical AI use and how we can be positive stewards for these powerful technologies.

So far, the answers are elusive. But the only way we’ll get there is by thinking deeply, reading widely and staying up-to-date.

Let’s catch you up on the biggest AI news from the last few weeks and how that applies to communications. 

Tools and uses

Amazon has entered the AI assistant race – with a few notable twists over competitors like Microsoft Copilot and Google Bard.

The new Amazon Q is described as a “work companion” by Adam Selipsky, chief executive of Amazon Web Services, in an interview with the New York Times. It can handle tasks like “summarizing strategy documents, filling out internal support tickets and answering questions about company policy,” according to the Times.

The tool was specifically built to handle corporate concerns around privacy and data security raised by other generative AI products. As the Times describes it:

Amazon Q, for example, can have the same security permissions that business customers have already set up for their users. At a company where an employee in marketing may not have access to sensitive financial forecasts, Q can emulate that by not providing that employee with such financial data when asked.

Q can also plug into existing corporate tools like Gmail and Slack. It undercuts the $30 price point of both Google and Microsoft, clocking in at $20 per user per month. 

But technology is already moving far beyond simple virtual assistants. An AI-generated “singer” posted “her” first song on X. It’s … something.

The appearance of “Anna Indiana” (please leave both Hannah Montana and the fine state of Indiana out of this) and the entirety of the song were composed via AI. The entire effect is uncanny valley to the extreme. But it’s not hard to peer into a not-too-distant future where this technology is refined and companies start creating their own bespoke AI influencers.

Imagine it: a custom spokesperson designed in a lab to appeal to your precise target audience, able to create their own material. This spokesperson will never go rogue and spout conspiracy theories or ask for huge posting fees. But they also won’t be, well, human. They’ll necessarily lack authenticity. Will that matter? 

The entertainment industry is grappling with similar issues as “synthetic performers” – or AI-generated actors – become a more concrete reality in film and television. While the new SAG-AFTRA contract puts some guardrails around the use of these performers, there are still so many questions, as Wired reports. What about AI-generated beings who have the vibes of Denzel Washington but aren’t precisely like him? Or if you train an AI model to mimic Jim Carrey’s physical humor, does that infringe on Carrey’s rights?

So many questions. Only time will have the answers. 

Risks

Yet another media outlet has seemingly passed off AI-generated content as if it were written by humans. Futurism found that authors of some articles on Sports Illustrated’s website had no social footprint and that their photographs were created with AI. The articles they “wrote” also contain head-scratching lines no human would write, such as opining on how volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”

Sports Illustrated’s publisher denies that the articles were created with AI, instead insisting an outside vendor wrote the pieces and used dummy profiles to “protect author privacy.” If this all sounds familiar, it’s because Gannett went through an almost identical scandal with the exact same company a month ago, including the same excuses and denials.

These examples underscore the importance of communicating with transparency about AI – and the need to carefully ensure vendors are living up to the same standards as your own organization. Failing to do so can be disastrous, especially in industries where the need for trust is high – like, say, media.

But the risks of AI in the hands of bad actors extend far beyond weird reviews for sporting equipment. Deepfakes are proliferating, spreading an intense amount of misinformation about the ongoing war between Israel and Hamas in ways designed to tug on heartstrings and stoke anger.

The AP reports:

In many cases, the fakes seem designed to evoke a strong emotional reaction by including the bodies of babies, children or families. In the bloody first days of the war, supporters of both Israel and Hamas alleged the other side had victimized children and babies; deepfake images of wailing infants offered photographic ‘evidence’ that was quickly held up as proof.

It all serves to further polarize opinion on an issue that’s already deeply polarized: People find the deepfakes that confirm their own already-held beliefs and become even more entrenched. In addition to the risks to people on the ground in the region, it makes communicators’ jobs more difficult as we work to discern truth from fiction and communicate with internal and external audiences whose feelings only grow more entrenched at the extremes.

Generative AI is also changing the game in cybersecurity. Since ChatGPT burst onto the scene last year, there has been an exponential increase in phishing emails. Scammers are able to use generative AI to quickly churn out sophisticated emails that can fool even savvy users, according to CNBC. Be on guard and work with IT to update internal training to handle these new threats.

Legal and regulation

The regulatory landscape for AI is being written in real-time, notes Nieman Lab founder Joshua Benton in a piece that urges publishers to take a beat before diving head-first into using large language models (LLMs) to produce automated content.

Benton’s argument focuses specifically on the most recent ruling in comedian and author Sarah Silverman’s suit against Meta over its inclusion of copyrighted sections from her book, “The Bedwetter,” in its LLMs. Despite Meta’s LLM acquiring the text through a pirated copy, Judge Vince Chhabria ruled in the tech giant’s favor and gave Silverman a window to resubmit.

Benton writes:

Chhabria is just one judge, of course, whose rulings will be subject to appeal. And this will hardly be the last lawsuit to arise from AI. But it lines up with another recent ruling, by federal district judge William Orrick, which also rejected the idea of a broad-based liability based on using copyrighted material in training data, saying a more direct copy is required.

If that is the legal bar — an AI must produce outputs identical or near-identical to existing copyrighted work to be infringing — news companies have a very hard road ahead of them.

Cases like this also raise the question: How much more time and how many more resources will be exhausted before federal regulation sets some standard precedents?

While Meta may count the initial ruling as a victory, other big tech players continue to express the need for oversight. In the spirit of Elon Musk and Mark Zuckerberg visiting the Senate in September to voice support for federal regulation, former Google CEO Eric Schmidt said that individual company guardrails around AI won’t be enough.

Schmidt told Axios that he believes the best regulatory solution would involve the formation of a global body, similar to the Intergovernmental Panel on Climate Change (IPCC), that would “feed accurate information to policymakers” so that they understand the urgency and can take action.

Global collaborations are already in the works. This past weekend, the U.S. joined Britain and over a dozen other countries to unveil what one senior U.S. official called “the first detailed international agreement on how to keep artificial intelligence safe from rogue actors,” reports Reuters.

It’s worth noting that, while this 20-page document pushes companies to design secure AI systems, there is nothing binding about it. In that respect, it rings similar to the White House’s executive order on responsible AI use last month – good advice with no tangible enforcement or application mechanism.

But maybe we’re getting ahead of ourselves. The best case for effective federal legislation regulating AI will emerge when a pattern of state-level efforts to regulate AI takes shape.

In the latest example, Michigan Governor Gretchen Whitmer plans to sign legislation aimed at curbing irresponsible or malicious AI use.

ABC News reports:

So far, states including California, Minnesota, Texas and Washington have passed laws regulating deepfakes in political advertising. Similar legislation has been introduced in Illinois, New Jersey and New York, according to the nonprofit advocacy group Public Citizen.

Under Michigan’s legislation, any person, committee or other entity that distributes an advertisement for a candidate would be required to clearly state if it uses generative AI. The disclosure would need to be in the same font size as the majority of the text in print ads, and would need to appear “for at least four seconds in letters that are as large as the majority of any text” in television ads, according to a legislative analysis from the state House Fiscal Agency.

One aspect of this anticipated legislation that could set a federal precedent is its requirement that federal and state-level campaign ads created using AI be labeled as such.

You can take this “start local” approach to heart by getting the comms function involved in the internal creation of AI rules and guidelines at your organization early. Staying abreast of legal rulings, state and federal legislation and global developments will not only empower comms to earn authority as an early adopter of the tech, but also strengthen your relationships with those who are fearful or hesitant about AI’s potential risks.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments! You can also get much more information about using AI in your writing during our upcoming Writing & Content Strategy Virtual Conference! 

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

 

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

The best way to respond to an AI crisis https://www.prdaily.com/the-best-way-to-respond-to-an-ai-crisis/ Tue, 21 Nov 2023 11:00:59 +0000


The winner of the Ragan Research Award sheds light on tactics in this new field of crisis response.


Deny. Apologize. Or make an excuse. 

These are three of the main strategies used by organizations during a crisis, including those related to generative AI.  

Two of those work fairly well, according to a paper produced by Sera Choi as part of the second annual Ragan Research Award, in partnership with the Institute for Public Relations.

Choi, a native of South Korea and current PhD candidate at Colorado State University, explored how best to respond to these emerging issues in her paper “Beyond Just Apologies: The Role of Ethic of Care Messaging in AI Crisis Communication.”  

 

 

To examine the best way to respond to an AI-related crisis, Choi created a scenario around a fictitious company whose AI recruiting tool was found to have a bias toward male candidates. 

Participants were shown three response strategies. In one, the company said the AI’s bias did not reflect its views. In the second, it apologized and promised changes. And in the third, the company outright denied the problem. 

Choi told PR Daily it was important to study these responses because generative AI can cause deeper problems than most technological snafus. 

“AI crises can be different than just technological issues, because AI crises can actually impact not only the individual, but also can impact on society,” Choi said.  

The research found that apologies or excuses could be effective – but denials just don’t fly with the public. 

“Interestingly, I also observed that the difference in effectiveness between apology and excuse was not significant, suggesting that the act of acknowledgment itself is vital,” she said.

Still, there may be times when you need to push back against accusations.

“While the deny strategy was the least effective among the three, it’s worth noting that there might be specific contexts or situations where denial could be appropriate, especially if the organization is falsely accused. However, in the wake of genuine AI-driven errors, our results underscore the drawbacks of using denial as the primary response strategy,” Choi wrote in the paper.  

Acknowledging bias or other problems in AI is the first step, but there are others that must follow to give an organization the best chance of recovery.  

“Reinforcing ethical responsibility and outlining clear action plans are critical, indicating that the organization is not only acknowledging the issue but is also committed to resolving it and preventing future occurrences,” Choi said. “This could include investments in AI ethics training sessions for employees and collaborations with higher education institutions to conduct in-depth research on ethical responsibilities in the field of AI.” 

Choi is just getting started with her research. In the future, she hopes to expand it into other areas including other kinds of AI crises or issues that affect public institutions. 

“The clear takeaway is that organizations should prioritize transparency and ethical responsibility when addressing AI failures,” Choi said. “By adopting an apology or excuse strategy and incorporating a strong ethic of care, they can maintain their reputation and support from the public even in difficult times.” 

Read the full paper here 

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

 

The post The best way to respond to an AI crisis appeared first on PR Daily.

PRSA releases new guidelines on ethical AI use in PR https://www.prdaily.com/prsa-releases-new-guidelines-on-ethical-ai-use-in-pr/ Mon, 20 Nov 2023 12:00:05 +0000


The organization sees opportunity for PR pros to act as an “ethical conscience” and combat disinformation throughout AI development and applications.  


The Public Relations Society of America has released a new set of ethics guidelines to help PR professionals make informed, responsible choices in the fast-moving world of artificial intelligence.  

“There are lots of opportunities with AI. And while we’re exploring those opportunities, we need to look at how we can guard against misuse,” said Michelle Egan, PRSA 2023 chair.  

Potential roadblocks 

ChatGPT was released just over a year ago. Even since January of this year, Egan has seen significant changes in PRSA members’ attitudes toward generative AI. 

“People said to me, ‘it feels like cheating,’” Egan recalled. “To now, ‘Oh, I can see how starting with one of these tools … gives me a little bit of a running start and lets me put more time into the higher order things so that I can do strategic thinking.’”

 

 

It’s likely that this guidance will evolve as the tools do. But for now, when she looks to the future, Egan anticipates more technological growth — but also potential pitfalls. 

As we move into a U.S. election year, she expects growing polarization to only add to the swell of mis- and disinformation, much of it driven by the rapid advancement of AI tools. 

But she also sees the potential for members of the profession to drive real change. 

“We have the opportunity to really educate across the board, to other professions and the C-suite, about the challenges there and how to prepare for it.”

How the guidance was developed 

At the beginning of 2023, Egan asked committees what their top concerns were for the year ahead. The answer was resounding, Egan said: AI and mis- and disinformation. 

The new guidance builds on PRSA’s existing Code of Ethics, which the organization places at the center of its mission. It was developed by the PRSA AI Workgroup, chaired by Linda Staley and including Michele E. Ewing, Holly Kathleen Hall, Cayce Myers and James Hoeft. The document is based on conversations with experts, other organizations’ guidance and the framework already provided by the PRSA’s code. 

The document lays out its advice across a series of tables that walk readers through each provision of the PRSA’s ethics code, explains its connection to AI, potential improper uses or risks and ways to use AI ethically. 

 

Part of PRSA's AI guidance

Egan said additional critical topics for communicators to consider right now are the potential for AI to spread disinformation and the biases that can be built directly into these powerful bots.

“When you’re using these models, you need to understand that the content comes from humans who have implicit bias, and so therefore, the results are going to have that bias,” Egan said. 

Properly fact-checking and sourcing content that’s produced by AI and ensuring you aren’t taking credit for someone else’s work is also top of mind.  

“To claim ownership of work generated through AI, make sure the work is not solely generated through AI systems, but has legitimate and substantive human-created content,” the guidance advises. “Always fact-check data generative AI provides. It is the responsibility of the user — not the AI system — to verify that content is not infringing another’s work.” 

Egan stressed the importance of education at this phase in AI’s tech cycle — not just for practitioners, but also within organizations.  

“We have to find our voice and speak up when there’s something that we truly think is unethical and not engage in it,” she said. The guidance document says PR professionals should be “the ethical conscience throughout AI’s development and use.”  

Find the full AI guidance here 

Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.

The post PRSA releases new guidelines on ethical AI use in PR appeared first on PR Daily.
