AI in June: Influence Ops, Education & Innovation

AI Influence Operations

In June, OpenAI released a report detailing its actions against influence operations from Russia, China, Iran, and Israel that utilized AI tools to manipulate public opinion. These operations used OpenAI’s tools, including ChatGPT, to generate content, create fake accounts and images, and debug code. Despite the scale of these campaigns, the AI-driven content failed to gain significant engagement. Ben Nimmo, principal investigator at OpenAI, noted that AI tools improve content production but struggle with distribution. OpenAI banned accounts linked to five covert operations, including Russia’s Doppelganger and China’s Spamouflage. Nimmo stressed the need for vigilance, warning that influence operations could succeed if not actively monitored.


ChatGPT Edu for Universities

OpenAI announced a new version of ChatGPT, called ChatGPT Edu, designed specifically for universities. Launched last week, this tool leverages the advanced GPT-4o model, capable of processing text, audio, and images in real-time. The initiative builds on successful partnerships with institutions like the University of Oxford and Arizona State University. ChatGPT Edu aims to enhance educational and operational frameworks by offering features like data analytics, document summarization, and enterprise-level security, all at an affordable price for educational institutions. "Integrating OpenAI’s technology into our frameworks accelerates transformation at ASU," said Kyle Bowen, Deputy CIO at Arizona State University.


Apple Intelligence Unveiled

At WWDC 2024, Apple announced "Apple Intelligence," a suite of new AI features for iPhone, Mac, and more, set to roll out later this year. These features include a more conversational Siri, custom AI-generated "Genmoji," and integration with OpenAI's GPT-4o for enhanced chatbot capabilities. Available on the iPhone 15 Pro and 15 Pro Max, as well as Macs and iPads with M1 or later chips, these features will debut with iOS 18, iPadOS 18, and macOS Sequoia. Apple's new AI capabilities enable Siri to perform actions across apps, manage notifications, and summarize text. Siri will also have "onscreen awareness" to better understand user requests, and users can type to it as well as talk. Privacy remains a priority: many functions are processed on-device, and complex requests are handled via Apple's "Private Cloud Compute," which ensures user data is not stored on or accessible from Apple servers. Additionally, the Photos app will see enhancements, including improved object search and features similar to Google's Magic Eraser.


AI Faces Data Shortage

Researchers are raising alarms about the rapid depletion of the human-written training data essential for improving AI models developed by companies like OpenAI and Google. Without a continuous influx of new data, these AI systems may reach a performance plateau, threatening the growth of the AI industry. "There is a serious bottleneck here," says Tamay Besiroglu, lead author of a new study. The issue is compounded by lawsuits from publishers, such as the New York Times, against AI companies for copyright infringement. The volume of text data used for training is growing by a factor of 2.5 per year, but computing capabilities are growing even faster. This mismatch could lead to a scarcity of fresh data as early as 2026, pushing AI firms to consider training on AI-generated data—a move experts warn could degrade AI quality over time.


Contrasting AI Strategies: Profit vs. Safety

OpenAI is considering transitioning to a for-profit benefit corporation, potentially paving the way for an IPO. This shift aims to enable OpenAI to better compete in the AI industry, where peers like Anthropic and xAI have already adopted similar structures. OpenAI is also expanding its lobbying efforts to navigate increasing regulatory scrutiny. In contrast, Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched Safe Superintelligence Inc. (SSI), a new AI company dedicated to developing superintelligence with a strict focus on safety. Co-founded by former Apple AI lead Daniel Gross and former OpenAI technical staff member Daniel Levy, SSI aims to balance safety and capabilities without the external pressures faced by AI teams at major companies. Sutskever emphasized that SSI will not engage in partnerships or other projects until their safe superintelligence product is fully realized.


Major Labels Sue AI Music Firms

Major music labels, including Sony, Warner, and Universal, have filed lawsuits against AI music firms Suno and Udio, accusing them of massive copyright infringement. Spearheaded by the Recording Industry Association of America (RIAA), the lawsuits claim that these companies copied decades of copyrighted sound recordings to train their AI models, thereby creating machine-generated music that competes with genuine recordings. The legal action seeks to halt these training practices and demands damages for the alleged infringements. Suno and Udio, prominent players in generative AI music, have faced criticism for producing outputs that closely mimic well-known songs and artists. Suno's CEO, Mikey Shulman, defended the company's practices, emphasizing its mission to generate original music and accusing the labels of avoiding good-faith discussions. The lawsuits highlight the tension between innovative AI technologies and the protection of intellectual property rights in the music industry.
