April 17, 2024


Big Tech is already warning us about AI privacy problems

Apple has restricted internal use of OpenAI’s ChatGPT and Microsoft’s Copilot, The Wall Street Journal reports. ChatGPT has been on Apple’s block list for months, Bloomberg’s Mark Gurman adds.

It’s not just Apple: Samsung and Verizon have done the same in the tech world, as have a number of banks (Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan). The concern is confidential data leaking out; ChatGPT’s privacy policy expressly states that your prompts can be used to train its models unless you opt out. And the fear of leaks is not unfounded: in March, a ChatGPT bug exposed data from other users.

I tend to think of this ban as a very loud warning shot.

One obvious use of this technology is in customer service, as companies try to reduce costs. But for customer service to work, customers have to give up their details — sometimes private, sometimes sensitive. How do companies plan to secure their customer service bots?

This is not just a customer service problem, either. Let’s say Disney decides to use artificial intelligence, in place of its visual effects departments, to work on its own Marvel movies. Is there a world where Disney is willing to let Marvel spoilers leak?

One thing that’s generally true of the tech industry is that companies in their early stages, like a younger Facebook, for example, don’t pay much attention to data security. Given that, it makes sense to limit your exposure of sensitive material, as OpenAI itself suggests you do. (“Please don’t share any sensitive information in your conversations.”) This is not an AI problem.

But I am curious as to whether there are intrinsic problems with AI chatbots. One of the expenses that comes with doing AI is computing. It’s expensive to set up your own data center, but using cloud computing means that your queries are processed on a remote server, where you’re essentially relying on someone else to secure your data. You can see why banks are afraid here – financial data is very sensitive.

On top of unintentional public leaks, there is also the possibility of intentional corporate espionage. At first glance, this looks like a tech industry problem; after all, theft of trade secrets is one of the risks here. But the big tech companies have moved into streaming, so I wonder whether it’s a problem for the creative end of things, too.

There is always a trade-off between privacy and usefulness when it comes to tech products. In many cases, on Google and Facebook, for example, users have traded their privacy for free products. Google Bard’s notice explains that queries will be used to “improve and develop Google’s products, services, and machine learning technologies.”

It is possible that these big, shrewd, secrecy-focused companies are just paranoid and there is nothing to worry about. But let’s say they are right. If so, I can think of a few possibilities for the future of AI chatbots. The first is that the AI wave turns out to be like the metaverse wave: overhyped. The second is that AI companies are pressured to overhaul, and clearly spell out, their security practices. The third is that every company that wants to use AI has to build its own model or, at the very least, run its own processing, which seems expensive and hard to scale. And the fourth is an online privacy nightmare, where your airline (or debt collector, or pharmacy, or whatever) leaks your data on a regular basis.

I don’t know how it will end. But if the most security-obsessed companies are shutting down the use of AI, there may be good reason for the rest of us to do so, too.