May 2023 - Sam Altman’s Testimony & AI Regulation - What B2B Leaders Should Know Before They Integrate GenAI, LLMs & ChatGPT Into Products & Processes


Since its release on November 30, 2022, the AI chatbot ChatGPT has become the fastest-growing app in history, reaching 100 million users by January 2023. Its runaway success has made OpenAI CEO Sam Altman a household name in the tech industry.

ChatGPT's impressive capabilities and explosive growth have also prompted concern over various issues, including the possibility that it will be used to spread disinformation and the chance that it will eliminate jobs faster than it creates new ones.

That's why, on May 16, Altman found himself testifying before the US Congress before his company had come anywhere close to anything like Facebook's Cambridge Analytica scandal. Tech titans such as Facebook's Mark Zuckerberg, Twitter's Jack Dorsey, and Google's Sundar Pichai, by contrast, made their Congressional debuts only after the world had realized their platforms' impact on various societal issues.

Sneak peek of the upcoming topics

  • The historic opening of the hearing
  • A short intro about Sam Altman
  • Can AI control the election?
  • AI and the future of the job market
  • AI and creators’ (copy)right
  • Is AI overtaking the news industry?
  • Then what next, Sam?

Sam Altman Testifies Before the Senate

Sen. Richard Blumenthal (D-Connecticut) admitted that Congress has struggled to regulate social media meaningfully. "Congress did not rise to the occasion on social media," he stated, adding that lawmakers have an obligation to do so on AI before the threats and risks become real.

After Trump's surprise victory over Clinton in 2016, many Democrats blamed the spread of fake news on social media sites like Twitter and Facebook. Conservatives have claimed that Facebook, Twitter, YouTube, and other platforms are "shadow banning" them and removing their content because it is too conservative.

Political disagreements aside, it is now evident that social media harms young users, promotes the spread of bigoted views, and fuels anxiety and isolation. For social media companies, which have become fixtures of the business and cultural landscapes, it may be too late to address those concerns. However, members of Congress agreed that precautions can still be taken to prevent artificial intelligence from spawning the same social ills. That is why AI's rapid growth concerns them.

Altman anticipated the senators' regret at having missed the opportunity to regulate social media, and instead presented artificial intelligence as a new phenomenon, one that would prove far more transformative and beneficial than a constant stream of cat memes (or, for that matter, racist messages).

"This is not social media," he emphasized. "This is different."


A lil bit about Sam Altman

The 38-year-old Altman was born and raised in St. Louis, where he developed an early interest in computer programming. He enrolled at Stanford University but left in 2005 to launch Loopt, a pioneering social app from the era before smartphones became widely available. Seven years later, conceding that his first attempt at entrepreneurship was a bust, he sold the company.

After selling Loopt, he started working at Y Combinator, a startup incubator. The role put him at the epicenter of the booming venture capital industry, where he met and learned from industry titans such as Reid Hoffman (founder of LinkedIn), Peter Thiel (co-founder of Palantir), and Paul Graham (co-founder of Y Combinator). He went on to lead Y Combinator and establish himself as a successful investor and advisor to budding entrepreneurs.

Altman has also dabbled in politics: in 2018, he considered running for governor of California; in 2019, he hosted a fundraiser for Andrew Yang, an outsider candidate for the Democratic presidential nomination; and he contributed financially to President Biden's campaign.


The Historic Opening of the Hearing

Blumenthal, chairman of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, opened the hearing with an AI creation: an audio recording of what sounded like him delivering "introductory remarks" in his own voice. He then admitted that the recording had been produced by voice-cloning software and that the text had been authored by ChatGPT, the AI chatbot from Sam Altman's OpenAI.

"That voice was not mine," Blumenthal claimed that he did not write such lines. 

He remarked, "I asked ChatGPT, 'Why did you pick those themes and that content?' And it said, and I quote, 'Blumenthal has a strong record of campaigning for consumer protection and civil rights. He has spoken out on topics including data privacy and the possibility of bias in algorithmic decision-making. As a result, the statement places an emphasis on these factors.'" After jokingly thanking Altman for the "endorsement," Blumenthal cautioned about the potential risks of generative AI.

All the hot topics they covered!


AI: The job eater?

The idea of robots taking over the world has been done to death in science fiction. Some employees, however, are starting to worry that fiction is becoming reality.

The widespread adoption of ChatGPT and similar generative AI tools has raised fears among workers that the technology will be used to replace them in their jobs.

The truth is that these innovations have already begun to permeate corporate settings. According to a survey of almost 4,000 U.S. adults conducted by The Harris Poll on behalf of Fortune, 74% of employed Americans familiar with ChatGPT have used the technology for work-related tasks, despite OpenAI only launching ChatGPT in November 2022.

And that's only going to grow, as 56% of workers say they've had internal discussions about implementing ChatGPT.

Current iterations frequently provide incorrect information and stodgy responses to creative prompts, but it is likely only a matter of time before those problems disappear. That prospect worries Americans, although opinions on the outcome are mixed.

Harris found that only about 40% of workers familiar with ChatGPT worry that the AI chatbot will completely replace their jobs, while 60% are hopeful that generative AI will make them more productive in their current roles. About 38% of workers are concerned that the technology will make them obsolete or less effective.

One major area of concern regarding this type of AI is its potential impact on future employment. Four in ten Americans are concerned that ChatGPT will hinder their ability to find employment. More than seven in ten people are concerned that artificial intelligence will displace jobs requiring skills like data entry and processing, media and communications, coding, and even human resource management.

When asked what his "biggest nightmare" was, the CEO mentioned the loss of jobs. 

Although Altman recognized the potential for significant job impacts from technological revolutions, he noted the difficulty of accurately predicting their exact nature. He said he is confident that even though some jobs will be lost, many better ones will be created. Altman stressed the importance of seeing GPT-4 as a tool rather than a sentient being, and claimed that people can largely choose how they use that tool. GPT-4 and similar systems, he explained, are good at narrow tasks but far from capable of entirely replacing human labor.

However, the timescale over which new opportunities can offset immediate job losses remains unclear, as noted by Gary Marcus, an AI expert and Professor Emeritus of Psychology and Neural Science at New York University.

He stated, "Past performance history is not a guarantee of the future." This implies that predicting future results based on past performance takes a lot of work. As Altman pointed out, technological progress has always created new occupations and employment opportunities. He did, however, think that the effects of AI on the labor market might be quite different from those predicted by conventional wisdom. The most pressing question was how long this effect would take to materialize, whether ten years, a century, or longer. Altman emphasized that nobody knows the answer to this question.

In a recent interview, Marcus shared his view that "artificial general intelligence" will eventually replace a large number of human jobs. "We have a long way to go before we can create truly intelligent machines, despite all the talk in the media. The AI we have now is only a taste of what will be possible in the next two decades." While noting the significant progress still needed, Marcus emphasized the potentially transformative impact of artificial general intelligence on the job market.

Christina Montgomery, IBM's Chief Privacy and Trust Officer, recognized the clear dangers AI poses to people across many sectors, even those in positions previously thought to be safe from automation.

"Some jobs will transition away," she remarked.
Referring to OpenAI's most recent generative AI model, GPT4, he said, "I think it's important to understand and think about GPT4 as a tool, not a creature." These models, he argued, were "good at doing tasks, not jobs," meaning they would facilitate human labor without displacing it.


Can AI create election disinformation?

For years, tech-savvy political scientists and computer engineers have warned that anyone could soon use low-cost, high-powered artificial intelligence tools to create convincing fake images, videos, and audio.

Until recently, synthetic images were more expensive to produce, less varied, and less convincing than other forms of disinformation easily spread on social media. AI and so-called deepfakes long seemed like a threat that was still at least two years away.

Modern generative AI tools make it possible to produce cloned human voices and incredibly lifelike photos, videos, and audio in seconds, at meager cost. Combined with the broad reach and pinpoint targeting of social media's algorithmic distribution systems, this takes shady campaign tactics to a whole new level.

The implications for the campaigns and elections in 2024 are significant and concerning. Generative AI has the potential to mislead voters, impersonate candidates, and undermine elections at a scale and speed never before seen. It can also produce targeted campaign emails, texts, and videos at lightning speed.

Examples of such tactics include robocalls in a candidate's cloned voice telling voters to cast their ballots on the wrong date, recordings purporting to show a candidate admitting to a crime or expressing racist views, videos of someone giving a speech or interview they never gave, and forged images styled after local news outlets falsely reporting that a candidate has withdrawn from the race.

Prof. Gary Marcus voiced concern that the potential dangers of new AI systems could outweigh their benefits. He stressed the potentially destabilizing nature of systems that can produce convincing falsehoods at an unprecedented scale. The technology directly threatens democracies: outsiders can use it to influence elections, and insiders can use it to manipulate markets and political systems.

Prof. Marcus also voiced concern that chatbots' covert impact on public opinion may exceed that of social media. He stressed that AI companies' decisions about which data sets to use would have profound, unobservable effects on the direction of society. In his final remarks, Prof. Marcus emphasized additional risks related to AI technology.

Altman said OpenAI had banned the use of ChatGPT for "generating high volumes of campaign materials" to counteract these dangers.


Artists and Creators: Is AI Ready for the Copyright Fight?

The next battleground in the war between artists and computers was set in motion in March, when Universal Music Group emailed Spotify, Apple Music, and other streaming services demanding that they stop artificial intelligence companies from using its labels' recordings to train their machine-learning software. Many players in the music industry, including Warner Music Group, HYBE, ByteDance, Spotify, and a slew of startups, are calling for legal protection from developers who exploit their work without permission to train AI algorithms. Meanwhile, developers are looking for places where they can operate free from governmental oversight.

Members of the committee couldn't resist lightening the mood of the otherwise solemn session. Cory Booker and Jon Ossoff struck up a Senate bromance, complimenting each other's looks and intellect, while a self-deprecating Senator Peter Welch said, "Senators are noted for their short attention spans, but I've sat through this entire hearing and enjoyed every minute of it."

Senator Marsha Blackburn was concerned about who would own a song written in the style and voice of her favorite singer, Garth Brooks, and asked Altman how OpenAI's automated music tool Jukebox squares with copyright law.

Even more concerning to Senator Mazie Hirono were deepfake songs that sounded like her favorite band, BTS. Despite the strangeness of bringing up a famous country singer and a K-pop sensation in this setting, Blackburn and Hirono made some interesting points about intellectual property.

With the help of machine learning, AI models are learning to replicate the elements that make songs catchy, such as the verse-chorus structure of pop, the 808 drums that define the rhythm of hip-hop, and the meteoric drop that defines electronic dance music. Human artists, whether formally trained or not, pick up these conventions over the course of their careers.

Altman said the company was working to address the problem. Regardless of the business arrangement, he stressed, the producers and owners of content must reap the rewards of technological advancement, and he mentioned ongoing consultations with artists and content owners to learn what they want. While acknowledging the importance of content owners controlling how their material and likenesses are used, Altman argued that people should receive considerable financial rewards when this new technology exploits their work. A common theme was that the company cared about doing right by its constituents.


Is AI Overtaking the News Industry?

According to Narrative Science, whose Quill software turns data into stories, a bot could be writing 90% of all news by 2025.

The Washington Post, the AP, the BBC, Reuters, Bloomberg, the NYT, the WSJ, the UK's Times & Sunday Times, Japan's NHK, Finland's STT, and many other major news organizations are all using or experimenting with AI. In 2018, China's Xinhua News Agency unveiled a CGI male news anchor powered by artificial intelligence; a female AI news anchor followed in 2019.

Altman said he hoped the tools his team was creating would help strengthen media outlets, and he stressed the significance of a strong national media. "Let's call it: round one of the internet has not been great for that..." he admitted, acknowledging the difficulties and constraints news organizations face in the digital age and his desire to find novel approaches to resolving them.

"But do you understand that this could be exponentially worse in terms of local news coverage if they are not compensated?" Senator Amy Klobuchar asks Altman. Because they require payment rather than theft of their work.

Altman stressed that his company was fully engaged in solving the problem, and he reiterated that those who make and own content should benefit from technological progress.


At the end of the day, AI needs to be regulated

Senator Dick Durbin opened by noting the unprecedented level of collaboration between government and business: when, he asked, had representatives of major firms or private-sector organizations last begged Congress to regulate them? Even if Altman and Montgomery's apparent pleading for regulation was exaggerated, their demonstrated openness to government supervision was not.

That was not just some trite expression. According to Altman, content made by generative AI needs a whole new framework, because Section 230 does not apply to it, and the corporations that provide the technology should be held responsible.

While this may be seen as a triumph for democratic checks and balances, it also highlighted the danger posed by AI and the urgent need for firms like OpenAI to shield themselves from legal responsibility.

Marcus said that implementing such sweeping regulations could require a new federal agency on the scale of the Food and Drug Administration, perhaps even a cabinet-level one. "I think we need a lot of technical expertise, I think we need a lot of coordination of these efforts," he continued, "because the number of risks is large and the amount of information to keep up with is so great."

Licenses for generative AI were also proposed, similar to those required for nuclear power plants.

For example, "the United States government might consider requiring a mix of licensing and testing to create and release AI models with capabilities above a threshold." 

Altman outlined several other ways private businesses and government agencies can work together. These include "ensuring that the most powerful AI models adhere to a set of safety requirements; facilitating processes to develop and update safety measures; and examining opportunities for global coordination." 

Chris Wylie, the Cambridge Analytica whistleblower, had already highlighted the lack of safety assessments and ethical norms at digital businesses. ChatGPT, like the original Facebook and Twitter, was released without such specifications, so Altman's testimony implicitly signaled a willingness to develop them.

Altman then detailed the session's action items. "Number one," he recommended, "is the formation of a new agency responsible for licensing any AI efforts that exceed a certain scale of capabilities." This regulatory body could revoke licenses and demand strict adherence to safety regulations.

Second, Altman suggested devising a set of safety standards focused on assessing models for potentially harmful capabilities. One such test, for example, would be whether a model can replicate itself.

And finally," Altman continued, "independent audits should be required. Company or regulatory agency evaluations should not be the only basis for these audits. Experts outside the organization should ensure that the model meets all the necessary safety standards and performance metrics for every activity or inquiry.

Altman's proposed measures aimed to increase safety and accountability throughout the creation and release of AI models by requiring adherence to stringent standards and external scrutiny during validation.

We work with many enterprises from different corners of the world. If you are concerned about data policy, security, and safety, check out our other articles here.

Book an AI consultation

Looking to build AI solutions? Let's chat.

Schedule your consultation today - this is not a sales call, so feel free to come prepared with your technical queries.

You'll be meeting Rohan Sawant, the Founder.
Book a Call

Let us help you.

Behind the Blog 👀
Moumita Roy
Writer

Moumita Roy is a content writer who has been working in the SaaS and AI industries for the last three years. When she is not at her computer, you can find her hiking in the mountains, binge-reading fiction, or cuddling up with her fur baby. She shares her content marketing journey on LinkedIn.

Thanks to this project, I learned so much about the behind-the-scenes of OpenAI and Sam Altman!
Editor