AI and You: Microsoft’s Copilot Moves, NYT-OpenAI Debate Fair Use, GPT Store Opens

Get up to speed on the rapidly evolving world of AI with our roundup of the week’s developments.

We’re just half a month into the new year, but predictions that 2024 will be remembered as boom times for generative AI seem to be coming true already.

Microsoft got things rolling on Jan. 4 by announcing the biggest change to its keyboard design in nearly 30 years, adding a button that will give people direct access to its AI Copilot tool on new Windows 11 computers starting this month. Makes sense given that Microsoft has invested $13 billion in OpenAI, the maker of ChatGPT and the large language model that powers the Copilot service. 

CNET’s Sareena Dayaram called the new keyboard button “a bold bid for AI dominance,” explaining how it will serve “as a physical portal to its Copilot service, which helps people perform tasks like summarizing documents, recommending music and answering questions you might ask a search engine or AI chatbot.” 

For its part, Microsoft said its goal is to make gen AI a part of everyday life, which doesn’t seem that far-fetched given that Windows is the most popular computer operating system, with over 1 billion people using it today. On Jan. 15, the company announced new subscription services for Copilot, which Microsoft says has been part of more than 5 billion chats and has created over 5 billion images so far. The consumer tier, Copilot Pro, is $20 a month (the same price as ChatGPT Plus).

“AI will be seamlessly woven into Windows from the system, to the silicon, to the hardware,” Yusuf Mehdi, Microsoft’s consumer marketing chief, wrote in a post announcing the Copilot key. “This will not only simplify people’s computing experience but also amplify it, making 2024 the year of the AI PC.”

It’s not just PCs that are getting an AI boost. Last week at CES, the world’s largest consumer electronics show, companies including Volkswagen, Intel, McDonald’s, L’Oreal and LG showcased AI-branded products and services. (You can find CNET’s complete coverage of CES here.) According to the Consumer Technology Association, which runs CES, over 230 million smartphones and PCs sold in the US this year will “tap the powers of generative AI” in some way.

“You don’t want to show up at the costume party in plain clothes, right?” Dipanjan Chatterjee, a principal analyst at Forrester, told CNET about the AI tagline being added to what seemed like every gadget and new service at CES. “Everyone’s going to be there saying AI. You’re probably going to look like a fool if you don’t.”

One of the more interesting AI announcements out of CES was Volkswagen’s news that it’s adding gen AI tech, including ChatGPT, to some of its car models in North America and Europe so you can talk to your car (visions of Knight Rider, anyone?). The software, which will be delivered to new and existing cars through an over-the-air update starting in the second quarter of 2024, will expand the capabilities of Volkswagen’s IDA voice assistant beyond simple tasks, like initiating a call, to things like automatically turning up the heat after you ask IDA “to warm up the driver’s side.” It will also be able to answer thousands of questions beyond giving you driving directions and destination info, including all kinds of advice, such as how to rekindle your love life, notes CNET’s Stephen Shankland.

Here are the other doings in AI worth your attention.

Mark Zuckerberg makes the pitch for open-source AI models.

Meta CEO Mark Zuckerberg shared thoughts with tech insider site The Verge on his company’s investment in AI and why he thinks other companies should also open source their tech, as Meta did with its Llama large language model. The conversation centered on building an artificial general intelligence, a system capable of handling any task that a human can do — and possibly doing those tasks better. That’s different from generative AI (see definitions below).

On defining AGI: “I don’t have a one-sentence, pithy definition. You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition,” Zuckerberg said, adding, “I’m not actually that sure that some specific threshold will feel that profound.” 

On the competition for AI talent: “We’re used to there being pretty intense talent wars. But there are different dynamics here with multiple companies going for the same profile, [and] a lot of VCs and folks throwing money at different projects, making it easy for people to start different things externally.”

On who controls AI and the need to make AGI models, like Meta’s Llama, available as open source: “I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated. Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”

On industry players eschewing open source and now calling for AI regulation: “There were all these companies that used to be open, used to publish all their work and used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, ‘Hey, this is going to be a really valuable thing, let’s not share it,'” Zuckerberg said. 

“The biggest companies that started off with the biggest leads are also, in a lot of cases, the ones calling the most for saying you need to put in place all these guardrails on how everyone else builds AI. I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy.”

How AI is changing the way we ask questions about our health.

Raise your hand if you’ve ever turned to Google to diagnose a medical issue. With AI, expect even more of us to turn to ChatGPT and other tools to get answers to our health questions.

CNET’s Jessica Rendall explains that AI is changing the way we investigate our health — for better and for worse. The way ChatGPT can quickly synthesize information and personalize results builds on the precedent set by “Dr. Google,” the researchers’ term for people looking up their symptoms online before they see a doctor. More often we call it “self-diagnosing,” she reports.

For people with chronic and sometimes mysterious health conditions that have left them with no good answers after numerous attempts to get a diagnosis, AI may be a game changer — analyzing a list of symptoms to suggest possible causes. 

But there are a few concerns, the biggest of which is that AIs can hallucinate, or give you information that sounds true but actually isn’t. Another concern is “the possibility you could develop ‘cyberchondria,’ or anxiety over finding information that’s not helpful, for instance diagnosing yourself with a brain tumor when your head pain is more likely from dehydration or a cluster headache,” Rendall said.

Still, ChatGPT can be helpful in translating medical jargon into simple English so patients can have more meaningful interactions with their doctors. Adds Rendall, “Arguably the best way to use ChatGPT as a ‘regular person’ without a medical degree or training is to make it help you find the right questions to ask.”

Why you should get on the chatbot bandwagon sooner rather than later.

If you’ve read this far and are still unsure what the gen AI fuss is all about, don’t worry — I got you. Despite all the noise around AI, most Americans (82%) haven’t even tried ChatGPT, and over half say they’re more concerned than excited by the increased use of AI in their daily life, according to the Pew Research Center.

Still, chatbots are literally changing the conversation around the future of work, education and how we may go about day-to-day tasks. So becoming comfortable with chatbots should be on your 2024 to-do list. 

To help with that, I wrote an expansive, consumer-friendly overview of chatbots as January’s cover story for CNET. It includes practical tips on how to start working with tools like ChatGPT and beyond, expert takes on which jobs will and won’t be affected by the gen AI tsunami (TL;DR: pretty much everything), the issues you need to be aware of when working with these tools — including privacy, security and copyright — and the use cases (the ethical ones, that is) we all should be experimenting with as soon as possible.

I encourage you to read it if you want to know what I’ve learned after a year looking into all things gen AI. In the meantime, here are a few takeaways:

Natural language: The new generation of chatbots — including ChatGPT, Google Bard, Microsoft Bing, Character.ai and Claude.ai — are each built on a large language model, or LLM, a type of AI neural network that uses deep learning (it tries to simulate the human brain) to work with an enormous set of data and perform a variety of natural language processing tasks. What does that mean? They can understand, summarize, predict and generate new content in a way that’s easily accessible to everyone. Instead of needing to know programming code to speak to a gen AI chatbot, you can ask questions (known as “prompts” in AI lingo) using plain English.
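(A quick aside for the technically curious: a prompt is just text sent to the model. Here’s a rough sketch of what sending a prompt looks like when a developer uses OpenAI’s Python library rather than a chat window; the model name, prompt and setup are placeholder assumptions, not tied to any of the chatbots above.)

    # A minimal sketch of sending a "prompt" to an LLM programmatically.
    # Assumes the openai Python package is installed and an API key is set
    # in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
        messages=[
            {"role": "user", "content": "Explain what a large language model is in one sentence."}
        ],
    )
    print(response.choices[0].message.content)  # the chatbot's plain-English reply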

Gen AI is a general purpose technology: Generative AI’s ability to have that natural language collaboration with humans puts it in a special class of technology — what researchers and economists call a general-purpose technology. That is, something that “can affect an entire economy, usually at a national or global level,” Wikipedia explains. “GPTs have the potential to drastically alter societies through their impact on pre-existing economic and social structures.” Other such GPTs include electricity, the steam engine and the internet — things that become fundamental to society because they can affect the quality of life for everyone. (That GPT is different, by the way, from the one in ChatGPT, which stands for “generative pretrained transformer.”)

Mass market phenomenon: If hitting a million users is a key milestone for turning an untested tech service into a mainstream destination, think about this: It took Netflix three and a half years to reach 1 million users after launching in 1999, Facebook 10 months and Instagram three months in 2010. ChatGPT, which debuted on Nov. 30, 2022, reached 1 million users in five days. Yep, just five days.

The AI effect on jobs: There’s been a lot of talk about the future of work and how jobs may fare given the productivity and profit boost AI and automated tech are expected to deliver. There’s good news and bad news on the jobs front. The bad news: As many as 40% of roles could be affected by the new tech, which means reskilling, retraining and rewriting job descriptions to reflect how AI will change the nature of jobs needs to happen now.

What should today’s — and tomorrow’s — workers do? The experts agree: Get comfortable with AI chatbots if you want to remain attractive to employers. The good news: According to Goldman Sachs, new tech has historically ushered in new kinds of jobs. In a widely cited March 2023 report, the firm noted that 60% of today’s workers are employed in occupations that didn’t exist in 1940. Still, Goldman and others, including the International Monetary Fund, said AI will lead to significant disruption in the workforce.

Among the new occupations we’re already seeing is prompt engineering. That refers to someone able to effectively “talk” to chatbots because they know how to ask questions to get a satisfying result. Prompt engineers don’t necessarily need to be technical engineers but rather people with problem-solving, critical thinking and communication skills. (Liberal arts majors — your time has come!) Job listings for prompt engineers showed salaries of $300,000 or more in 2023.

Think of it the way that Andrew McAfee, a principal research scientist at the MIT Sloan School of Management, described it to me.  “When the pocket calculator came out, a lot of people thought that their jobs were going to be in danger because they calculated for a living,” he said. “It turns out we still need a lot of analysts and engineers and scientists and accountants — people who work with numbers. If they’re not working with a calculator or by now a spreadsheet, they’re really not going to be very employable anymore.”
