OpenAI will solve its privacy problems before you
OpenAI’s Dev Day had some exciting announcements — we’ll talk more about them here soon, but Vikram shared early thoughts on the Infinite ML podcast this week. Joey also interviewed Vik Singh on the podcast about building a business in generative AI.
A major argument in favor of open-source LLMs has been privacy & security. The idea is that users who don’t want to share data with OpenAI and other model providers — either for regulatory or philosophical reasons — will use open-source models instead. This is fairly impractical for two reasons. First, it’s going to be unreasonably complex and expensive, and second, OpenAI and other model providers will get there first. Let’s dive into why.
Why you won’t solve the problem
We’ve been beating the cost drum for a few weeks now, and OpenAI remains incredibly cheap. That advantage only grew this week with their latest pricing updates. Whether that’s because they’re eating costs to capture market share or because they’ve reached enough scale to optimize their deployments better than anyone else remains to be seen. The rate & scale of improvement makes us think these gains will last, but only time will tell. Regardless, we believe they have about two orders of magnitude of pricing headroom before they’re on par with the current cost of running open-source models.
Self-hosting isn’t just expensive, it’s also incredibly difficult. The pool of software engineers comfortable deploying and scaling LLMs is vanishingly small, and OpenAI is shoveling cash into the pockets of the best of them.
Here’s a telling anecdote: A leading tech company — you certainly have their app on your phone — recently shared with us that they tried to solve data privacy challenges by deploying LLMs in their cloud. This company has a well-respected machine learning organization that’s spawned multiple startups. After a few months, they gave up and are instead working to sign a data privacy agreement with one of the big 3 cloud providers, so they can rely on hosted LLMs instead and offload data privacy & security concerns to the cloud provider.
This is an important data point for two reasons. First, if a team of this (very high) caliber can’t solve the problem, there aren’t many that can. Second, their ultimate solution points us in the direction privacy-sensitive LLM solutions will take.
Why LLM (and cloud) providers will
The elephant in the room when it comes to the privacy discussion is that OpenAI, Google, Anthropic, etc. all know that privacy is a concern. This isn’t news to them, and they’re working on solutions. This week, OpenAI reaffirmed its commitment to not use ChatGPT Enterprise + GPT API data for future training runs and even introduced Copyright Shield to protect their customers from claims of copyright infringement.
OpenAI’s partnership with Microsoft has made Azure a natural landing spot for GPT. Azure already has an OpenAI Service, and it’s likely we’ll get even more security-sensitive deployments of GPT in the context of government and healthcare cloud services. Anthropic’s recent partnership with AWS will likely lead to a similar setup, especially given that AWS doesn’t currently have a horse in the LLM race. Finally, as we discussed above, Google is already in the process of signing data privacy agreements with major companies. This will likely set a precedent around data sharing and safe data use that others will follow.
The upshot of all this is that if you’re in the cloud and worried about data privacy, there will soon be off-the-shelf solutions that let you pick up cutting-edge LLMs without bashing your head against the GPU allocation and autoscaling wall. Private deployments will certainly cost you (a lot!) more than multi-tenant ones, but as with most cloud services, it will make sense for a large majority of users to buy rather than build.
With all this said, there will always be a set of use cases that need air gaps or other forms of strict privacy and security — those developers will use open-source models and greatly benefit from them. Realistically, however, these are relatively niche use cases, and many of these organizations might not be using the cloud at all.
For everyone else, the incentives of the LLM and cloud providers are fully aligned with giving users the security & privacy they need. If major banks and healthcare systems can trust the cloud, they can certainly trust the LLMs that cloud providers will host for them soon.
That doesn’t mean there aren’t good arguments in favor of open-source LLMs — privacy & security simply aren’t on the list.