Cover photo

Client vs Protocol

Web1.0 —> Web2.0 —> Web3.0

Client vs. protocol, infrastructure vs. applications — both describe a phenomenon seen in every software cycle since the birth of the internet. A protocol is a set of rules for communication between network devices, facilitating data exchange over the internet or other networks, while a client is a software application that acts as the bridge between end users and a given protocol. Over the past 30 years of the internet, we've seen different experiments in how the protocol and the client are owned, controlled, and governed. And we're now seeing companies approach the client <—> protocol relationship in different ways in two of the most innovative areas in tech today — Web3.0 and AI.

TLDR

The relationship between protocols (the underlying rules and infrastructure) and clients (the applications that interface with protocols) has evolved across the Web1.0, Web2.0, and Web3.0 eras. In the early days of the internet, open protocols were built with clients on top. Then, in the Web2.0 era, companies like Facebook and Twitter owned both their proprietary protocols and clients. Now, Web3.0 is aiming to return to a world of open, decentralized protocols with diverse clients built on top.

There's a strong argument for separating the protocol and client layers. When a single company controls both, they can limit access, stifle innovation, and capture most of the value created. Separating the layers allows for more permissionless innovation, client diversity, and sharing of value with users and developers.

We're seeing this dynamic play out in the nascent AI industry. Companies like OpenAI are taking a closed-source approach, owning both the protocol and client, while others are experimenting with open models. There's also some interesting work happening around interoperability between AI models and clients, similar to the multi-chain world in Web3.0.

It's an exciting time to be building.

Web1.0

In the late 1970s, 80s, and early 90s, the Internet we have today was just a glimmer in the eye. Hundreds of bright-eyed, bushy-tailed developers were building open, global protocols that would democratize access to information for everyone. And they did an incredible job building the protocols to make this possible. Hundreds of protocols were built during this period without which the internet would not be possible — all of them open protocols that anyone could build clients on top of. One of these people, without whose protocols and applications the internet as we know it today would not exist, is Tim Berners-Lee. He developed the World Wide Web (or the Web for short), a system that relies on a combination of protocols and standards to function. He needed to invent a few protocols to make the Web a reality:

  • HTTP (Hypertext Transfer Protocol): This is the foundation of data communication on the World Wide Web — an application-layer protocol that facilitates communication between web browsers and servers, following a client-server model. It defines a set of request methods (such as GET, POST, PUT, and DELETE) and status codes (like 200 OK, 404 Not Found) to enable the retrieval and manipulation of resources identified by URLs. It is also the reason you mindlessly type https:// before a URL — HTTPS being HTTP over an encrypted connection.

  • HTML (Hypertext Markup Language): He also created HTML, the standard markup language used for creating web pages and applications. HTML allows for the structuring of information and inclusion of hyperlinks, which enable navigation between different documents on the Web.

  • URLs (Uniform Resource Locators): He also created the URL system, a standardized method to locate resources on the Web. URLs specify the location of a resource (such as a web page) on the internet.

  • The first web browser: Berners-Lee wrote the first web browser, called WorldWideWeb (later renamed Nexus), along with the first web server software. This browser was the application that allowed users to view the Web, navigating through hyperlinks.

In essence, the early Web was built on HTTP for communication, HTML for document structure and presentation, and URLs for addressing — all built by Tim Berners-Lee and his collaborators. The Web included some new inventions, but many of the core components, such as the hardware itself and the underlying networking protocols, had already been built. These protocols and applications laid the foundation for the Web, making it possible for people to easily publish and access information.
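
The division of labor among these three inventions is still visible in any modern standard library. Here's an illustrative sketch in Python (the URL is Berners-Lee's original CERN page, used purely as an example):

```python
from urllib.parse import urlparse

# A URL bundles together all three inventions: the scheme names the
# protocol to speak, the netloc names the server to contact, and the
# path identifies the HTML resource to fetch.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # "http" -> which protocol to use
print(parts.netloc)  # "info.cern.ch" -> which server to contact
print(parts.path)    # "/hypertext/WWW/TheProject.html" -> which resource

# The HTTP request line a browser (client) would then send to the server:
request = f"GET {parts.path} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"
```

Any browser, curl command, or homemade script that follows these same rules can fetch the same page — that interchangeability is exactly what an open protocol buys you.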

And that's just a few of the protocols that make up the foundation of the internet — there are dozens more. Most people have heard of TCP/IP (Transmission Control Protocol/Internet Protocol), but don't actually know what it does as the bedrock of the internet.

TCP/IP is a suite of communication protocols that form the foundation of the internet, enabling reliable, ordered, and error-checked delivery of data between applications running on hosts in different networks. It follows a four-layer model: the Link layer handles the physical transmission of data; the Internet layer (IP) is responsible for addressing, routing, and packet fragmentation; the Transport layer (TCP) establishes end-to-end connections, ensures reliable delivery, and controls flow and congestion; and the Application layer (HTTP, FTP, SMTP) interacts with user applications. The IP protocol uses a hierarchical addressing system (IP addresses) to identify hosts and route packets, while TCP establishes a virtual circuit between the source and destination, breaking data into segments, acknowledging received packets, and retransmitting lost ones to ensure data integrity.
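
To make the Transport layer's job concrete, here's a toy sketch in Python of TCP-style segmentation and in-order reassembly. This is a simplification for intuition only — real TCP also handles acknowledgements, retransmission, flow control, and congestion control:

```python
import random

def segment(data: bytes, mss: int):
    """Split a byte stream into (sequence_number, payload) pieces,
    the way TCP segments data up to its maximum segment size (MSS)."""
    return [(seq, data[seq:seq + mss]) for seq in range(0, len(data), mss)]

def reassemble(segments):
    """Reorder segments by sequence number and rebuild the stream,
    as the receiving TCP stack does before handing bytes to the app."""
    return b"".join(payload for _, payload in sorted(segments))

message = b"protocols move bytes; clients give them meaning"
segs = segment(message, mss=8)

# IP makes no ordering guarantee, so simulate out-of-order delivery;
# repairing this is TCP's job.
random.shuffle(segs)

assert reassemble(segs) == message
```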

Other protocols include HTTP, mentioned above, and some people know SMTP (Simple Mail Transfer Protocol), IMAP (Internet Message Access Protocol), SNMP (Simple Network Management Protocol), FTP (File Transfer Protocol), DNS (Domain Name System), SSH (Secure Shell), and dozens more. Many of these protocols — all of them open source — have applications built on top of them serving tens of millions of users. In the case of a few, billions globally.

It's worth noting that the developers building these core internet protocols didn't profit from the foundations they built for the next generation to build clients on. They did it in the spirit of building something better for their fellow humans, to make their lives better and easier by giving them better access to information. It's unfortunate that they didn't have a mechanism to receive a small piece of the upside in these protocols they developed, or in the success of the clients built on top of them. They, like all people who build products in that spirit, deserve upside. If they had gone the route of commercializing these protocols in a company structure, they likely would have stifled the growth of the internet, or maybe hindered it altogether. Luckily there's a technology today that makes this possible for the infrastructure builders of the future — but we'll get to that later.

All of these core protocols were built in the 70s, 80s, and early 90s, but things didn't take off immediately — the internet's gestation period took time. The first killer application to reach millions of users on top of these protocols was the Mosaic browser, developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. Development began in late 1992, and the browser was officially released in 1993. The project was led by Marc Andreessen and Eric Bina. Andreessen, a student at the time, and Bina, a staff member at the NCSA, worked together to create a browser that was user-friendly, capable of displaying images inline with text — an innovation at the time — and able to run on various operating systems, including Windows, Macintosh, and UNIX. Mosaic was one of the first web browsers to feature a graphical user interface (GUI) instead of a command-line interface, which made the Web more accessible and user-friendly for non-technical people. Because of this, Mosaic was instrumental in introducing the Web to millions of users worldwide.

This same team later built Netscape Navigator — the first large-scale commercial client success on the internet. Netscape, a few years after launch, was a public company serving tens of millions of users around the world. More importantly, it was the beginning of the .com boom, which was largely a client-focused funding boom for useful products built on top of core protocols that had been in development for 20+ years. The bet was that all of the value would accrue at the client (also called application) level, and while in the short and long term they were right, the medium term involved many years of pain after the .com bubble burst. Yet some of the most valuable companies in the world today were founded during this period, including Google and Amazon. The dot-com bubble also brought about the transition from the Web1.0 world, with open protocols and companies building clients on top of them, to the Web2.0 world, where companies closed-sourced their protocols and built closed-source clients on top.

Web2.0

The transition from Web1.0 to Web2.0 wasn't marked by a specific date, but rather by a gradual evolution of internet technologies and the types of companies being built from the late 1990s to the early 2000s. What clearly marked the transition was the shift from building clients on open protocols to companies proprietarily owning both the protocol and the client.

Initial Google logo from September 15, 1997 to September 27, 1997

Google is one of those rare companies that embodies both Web1.0 and Web2.0, having been born in the transition phase. Google developed its own protocol for organizing the world's information online, initially through PageRank, and also built a simple client that interfaces with that protocol and is now used by billions of people monthly.

On the other hand, their second largest product, Gmail, is a client built on SMTP, an open source protocol developed in Web1.0 that Google clearly does not own. Google had one foot in each era of the web.

Other Apps in Web2.0

Facebook, Twitter, Airbnb, Uber, Instagram, TikTok, Reddit, LinkedIn, and pretty much every consumer product that had success in Web2.0 built and proprietarily owned both the protocol and the client — and nearly everything was closed source. While some projects contributed to open source (Airbnb with Lottie; Facebook with React, React Native, PyTorch, and GraphQL; Uber with H3 and Jaeger), most of the development was, and still is, closed for value accrual purposes.

Facebook developed a protocol for anyone in the world to share text (later images and videos) with their friends. They also developed a client on top of this protocol that facilitated the reading and writing of this data in a familiar way and provided recommendations for new interactions based on algorithms trained on user data.

Twitter (now X) developed a text-based protocol similar to Facebook's, but with constraints that made for differentiated narrative and content. They also developed a client that hundreds of millions of people around the world use.

Airbnb developed a protocol in which users could list their homes for strangers to sleep in, and exchange funds for that service. They also had to develop a client, a beautifully designed one for that matter, that is used by millions to browse a catalog of places to sleep in strangers' homes globally.

Uber developed a protocol that connected drivers to people who needed rides, and facilitated the financial transaction between the two. They also built a client where ride seekers could press a button on their phone and, within a few minutes, someone with spare time and a car would provide transportation anywhere they wanted.

Instagram developed a protocol for people to share images with their friends, and built a user-friendly client with a feed, an algorithm, and photo editing tools that made it easier for people to make their images look better.

TikTok built a protocol that stored short-form video content and a user-friendly client that made it easy for people to share that content with their friends, with an algorithm trained on user data to suggest videos from strangers they might like.

Ok, maybe that was overkill, but I wanted to drill in the fact that this type of building is the dogma of Web2.0 — but it's clearly not the only way.

Specifically differentiating between client and protocol may seem insignificant, but having a protocol closed source and controlled by a company has real implications. For example, when that company decides to open up API access for developers to build clients, developers need permission for every API call to the protocol. At any time, the company that controls the protocol can modify what you have access to, or cut off access entirely, for whatever reason.

This is best exemplified with the Facebook and Twitter APIs.

Initially, Facebook’s API provided broad access to developers, enabling a flourishing ecosystem of third-party applications that could leverage Facebook’s vast social graph. This openness was instrumental in Facebook’s rapid growth, as it allowed for innovative applications and services that drove user engagement.

Zynga, a social gaming company, built the vast majority of their company on the Facebook protocol. They were a leading developer of social games like FarmVille, and saw a significant decline in their user base and engagement levels following changes made by Facebook to its API and News Feed algorithm around 2011-2012. Facebook’s modifications limited Zynga’s ability to interact with users via social features, crucial for the organic growth and engagement of its games. These API changes, combined with adjustments to the News Feed algorithm that de-prioritized app notifications in favor of content from friends and family, drastically reduced Zynga’s visibility on the platform and decimated their business.

Twitter's API followed a similar path, as illustrated by the experiences of TweetDeck, Twitterrific, and dozens of other developers following Twitter's API changes. Implemented in 2017-2018, these changes stripped away essential features such as real-time updates and chronological viewing of tweets, which were fundamental to the user experience and functionality offered by these third-party Twitter clients. Most recently, the cost of the Twitter API increased substantially with little notice.

Clients had no say in the changes to these protocols, and their companies were severely damaged by them.

Where Does the Value Accrue

The other side effect of Web2.0 is where the value accrues. Companies owning and controlling both the protocol and the client makes it incredibly easy for these entities to extract value and keep it within the walls of the company's shareholders. That's great for investors, founders, and employees, but the early users who helped build the value receive nothing in return besides free use of a product — like the users on Instagram who, mostly unknowingly, share all kinds of personal data that Meta then uses to sell to advertisers. YouTube does the best job among the big tech companies of sharing ad revenue with creators, but none of it is shared with the users who provide the eyes being sold to. There's a valid question as to whether that revenue should even be shared, and I would argue yes, even if users receive only a few dollars a year. The trouble is that it's hard to do these micropayments at scale with Web2.0 systems. If only there was a technology that could enable this 🤔.

In his latest book, Chris Dixon has a great section where he explains:

What we have today may feel like a golden age for creative people: creators can push a button and instantly publish to five billion people. They can find fans, critics, and collaborators just about anywhere on earth. But they're mostly forced to route everything through corporate networks that devour tens of billions of dollars that might otherwise have funded an immeasurably greater diversity of content. Imagine how much creativity we're missing out on because earlier attempts at decentralized social networks, while noble, like RSS, couldn't hold their own.

We can do better. The internet should be an accelerant for human creativity and authenticity, not an inhibitor. A market structure with millions of profitable niches, enabled by blockchain networks, makes this possible. With fairer revenue sharing, more users will find their true callings, and more creators will reach their true fans.

Imagine if Web2.0 had developed its protocols and clients separately. In this world, Twitter would look like Farcaster or Lens: Twitter would have been the underlying open source protocol, with a closed source client that interfaces with it (Farcaster has warpcast.com as its main client but plugs into aggregators like Firefly and Yup; Lens has hey.xyz, buttrfly.app, and phaver.com, and also plugs into aggregators). In this world, Twitter could have multiple clients with different ideals and belief sets — one with more libertarian ideals, one with more liberal ideals, one that strictly kicks off people who don't like pizza because it's classified as hate speech. It wouldn't matter, because users could take their audience to a different client on the underlying protocol. And the de-platforming that ensued during the covid era wouldn't have been as big of an issue if users were just kicked off a client, rather than off both the protocol and the client. On top of this, with that separation, creators and early users would be rewarded for their early contributions and share in the success of the products they love.

Now, let's get to the good stuff — where this is actually possible.

Web3.0

The transition from Web 2.0 to Web 3.0, like the transition from Web 1.0 to Web 2.0, wasn't marked by a specific date, but rather by a series of technical advances, some of which we're still in the middle of. The transition started with the launch of Bitcoin in 2009 and has continued into the 2020s, marking a paradigm shift from centralized, proprietary client-protocol architectures to decentralized, open source protocols with rich client diversity. Succinctly, Web 3.0 aims to create a secure, user-centric internet.

The first technological advance was the set of technologies Bitcoin amalgamated — the SHA-256 hashing algorithm, the Elliptic Curve Digital Signature Algorithm (ECDSA) for public <—> private key pairings, and proof-of-work consensus, where miners compete to find a hash value that meets a predetermined difficulty target in order to reach consensus on the state of the blockchain and add new blocks. Together these technologies enabled peer-to-peer transactions without intermediaries, while maintaining a transparent, immutable, and tamper-proof public ledger of all transactions. Bitcoin is an application-specific chain optimizing for secure, decentralized, censorship-resistant financial transactions, rather than a general-purpose platform for building a wide range of decentralized applications. It was the first open source protocol developed in Web3.0.
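
The proof-of-work puzzle described above is simple enough to sketch in a few lines of Python. This is a toy with a tiny difficulty target; real Bitcoin mining double-SHA-256-hashes an 80-byte block header against an enormously harder target:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with
    `difficulty` zero hex digits -- the puzzle miners race to solve."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block #1: alice pays bob 1 BTC", difficulty=4)

# Verification is cheap: anyone can re-hash once to check the work,
# which is what makes the ledger tamper-evident.
digest = hashlib.sha256(f"block #1: alice pays bob 1 BTC{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

The asymmetry between finding the nonce (expensive) and checking it (one hash) is what lets thousands of untrusting nodes agree on a shared history.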

Coinmarketcap.com from 2013

This marked a mini boom in protocols forking Bitcoin to launch their own application-specific protocols. Litecoin, Namecoin, Peercoin, and many other open source protocols launched with the same mental model as Bitcoin — an application-specific chain that does one thing. Then came Ethereum.

Ethereum was the first general-purpose blockchain, and it took the world by storm. It is a Turing-complete platform for creating and executing smart contracts that leverages a combination of technologies: Proof of Stake consensus (upgraded from proof-of-work), the Ethereum Virtual Machine (EVM) for running smart contract code, the Solidity programming language for writing smart contracts, and a modified Merkle Patricia Trie data structure for efficiently storing and updating state. Ethereum enables the creation of complex, self-executing contracts with predefined rules and conditions. This has enabled the development of all kinds of dApps, such as decentralized finance (DeFi) protocols, non-fungible tokens (NFTs), decentralized autonomous organizations (DAOs), and more.

Now, there are dozens of alternative general purpose Layer 1 protocols, and dozens of layer 2 protocols on top of Ethereum, all of which are open source.

Even though the promise of Web3.0 is open, transparent protocols, there are still closed protocols that are crucial in the space. Coinbase, although technically a client on various crypto protocols, lets users buy and sell crypto (also a protocol) through its UI (client), but its codebase is completely closed off. Developers may at some point be able to build on top of Coinbase, but it'll be a permissioned API that needs Coinbase's blessing for each call. Compare this with dYdX or Uniswap, both also trading protocols, but open source. Not only can anyone build a client on top of their underlying protocols for people to trade, but developers never have to worry about their products getting shut off or altered in a way that hurts their business, as they would in a Web2.0 structure, unless the change is approved by a community vote. And even if the community voted to block them at the protocol level, they could fork the code, run their own protocol, and build their client on top of that. This is different from Coinbase, which started during the early transition phase from the Web2.0 to the Web3.0 era, and embodies the culture of Web2.0 from a technology perspective while serving the Web3.0 community.

With that said, the vast majority of protocols and clients in the Web3.0 space are partially, or completely open source. Let's dive into a few.

Examples of projects with open sourced protocols and clients

Ethereum is an open-source blockchain designed to enable the development and operation of decentralized applications by allowing anyone to deploy arbitrary code to it. It facilitates consensus-driven execution of smart contracts across a global network of nodes, creating a decentralized virtual machine that securely and efficiently verifies and enforces the contractual state transitions initiated by globally dispersed nodes. Uniswap, Compound, MakerDAO, and tens of thousands of other clients are built on top of Ethereum.

Solana is also an open-source blockchain designed to enable the development and operation of decentralized applications, providing functionally the same protocol utility as Ethereum, but with a different underlying protocol structure that makes different tradeoffs between decentralization and performance. Phantom, Jupiter, Wormhole, and tens of thousands of other clients are built on top of Solana.

IPFS, short for the InterPlanetary File System, is an open source protocol designed to create a decentralized method of storing and accessing files, websites, applications, and data. Filecoin is a client built on top of IPFS that acts as an economic layer to motivate participation and ensure reliability in data storage.

Fleek is building an open source edge network (like Vercel), and has built a client on top of their open source protocol to make it easy for users to deploy their websites on it.

dYdX is an open source protocol enabling decentralized derivatives trading. They also built an open source client for users to interact with the protocol, though anyone could build their own client, or fork it and run their own protocol.

Not all Web3.0 projects are made equal. OpenSea, for example, is a closed source UI built on top of the Ethereum protocol to facilitate buying, selling, transferring, viewing, and showcasing NFTs. Still an amazing product, just closed source.

Where will the value accrue?

There is a ton of great writing on this topic already, so I'll just share some high-level background. Before the dot-com bubble, there was a thesis that the infrastructure/protocol level is where the value of the internet would accrue. This led to companies like Nortel, an internet infrastructure company, being worth more than 1/3 of the entire Toronto Stock Exchange at its peak. It was defunct less than a decade later, because that thesis turned out to be wrong: the value accrued at the client, or application, layer, not the infrastructure. Some have speculated that Web3.0 will be the opposite — a fat protocol thesis — though that is still very much up in the air.

I think the bigger question is what will be valuable long term in Web3.0? In Web2.0, it wasn’t obvious that the valuable thing would be data, therefore advertising, which was collected at the client level. Maybe what will be valuable long term in Web3.0 isn’t as obvious as the predictions today suggest, and it will actually be something some project finds out in the years to come.

In our current state, most of the top 100 tokens are L1s or L2s (protocols), with just a few clients (Uniswap, stablecoins, Maker, etc.). The issue with the fat protocol thesis is that at some point, narrative alone won't be enough to drive speculation, and usage of these protocols will become necessary for them to be valuable. The issue with this for general-purpose blockchains is that any successful application is incentivized to launch its own chain. Why wouldn't you launch a chain that drives millions of dollars a year back into your ecosystem instead of paying rent to whichever chain you're built on?

The end state of every successful client is a protocol

You may think this is the same as the Web1.0 —> Web2.0 transition, but it doesn't seem to be. The main difference is that the underlying protocol remains open source and decentralized, whether the main client is open or closed source. The obvious benefits are that it's much harder for a single entity to control or attack the protocol, and that the projects building on top have more influence over any changes to the underlying protocol, unlike with the changes to Facebook's and Twitter's APIs.

Any client on a general-purpose chain that turns itself into a protocol by launching its own chain converts a tax that leaves its ecosystem (gas fees paid to someone else's chain) into revenue that stays within its ecosystem. dYdX is just starting to see the benefits of launching their app chain on Cosmos. Since the launch of Base in July of 2023, Coinbase has already earned more than $50M in revenue from transactions. Imagine these networks in 10 years, and the value that will be captured and shared with participants in these protocols.

Where are the users

Crypto hasn't had its Netscape Navigator moment yet — or, in AI terms, its "ChatGPT moment" — an application that gets tens of millions of users. And while market prices may suggest otherwise, the usage numbers support this theory. Netscape was estimated to have tens of millions of active users by 1995: there were 20+ million users on the internet, and Netscape had 80%+ market share just a year after it launched. Two years later, there were 70 million users on the internet, with Netscape well over 60% market share. In crypto, meanwhile, Solana is just above 1M daily active wallets and Ethereum is under 600K. Monthly active wallets might be a more comparable metric to "users on the internet," but even those numbers — 20M on Solana and 17M on Ethereum — are, combined, lower than the 1997 internet numbers. Hopefully this is an auspicious sign that many of the good things are yet to come.

Project Spaces that need more love in crypto

Don't get me wrong, 10x leverage on meme tokens is entertaining, and probably provides value to some people, but besides making the founders/team/investors/early users rich, it doesn't provide much value for the everyday person. I would love to see the true promise of Web3.0 lived out, and for that to happen, we need more builders and more funding going into things that are actually useful to people beyond speculation.

Awesome attempts at this:

Farcaster/Lens/DeSo — decentralized social

Drakula — onchain tiktok

Fleek — decentralized hosting

3DNS — onchain domains

DIMO — monetizing car data to share with users

ENS — onchain identity

Inference/GPU — Render/Kuzco

I'm sure there are countless others, but you get the idea. Speculation is an amazing tool that drives progress, but focusing only on it can be harmful. Speculating to achieve something useful is really what birthed Silicon Valley, and it's something the crypto space could do a better job of.

Let's experiment!

If you dig deep enough, it seems everything we do builds on the shoulders of the people who came before us — putting pieces together in ways they hadn't been before. Bitcoin seemed completely new, but it really just put existing technologies and protocols together in a way that hadn't been thought of before. Ethereum also seemed completely new, but it too took existing ideas and technology and put them together in a way that hadn't been thought of before. Innovation seems more like a Fibonacci sequence, or a flower of life, continuously extending itself, rather than sudden, isolated 0 to 1 creations.

With that in mind, could we extend the ideas of existing Web2.0 protocols and combine them with Web3.0 technologies to create something better than what currently exists? Experiments are being run in finance, so why not in other areas?

Onchain Uber protocol — a protocol for facilitating connections between drivers and riders through smart contracts. With the advance of autonomous vehicles, a protocol could be built to coordinate driverless cars with onchain payments, rewarding the early suppliers of cars in the network as well as the users who ride.

Onchain Airline protocol — airlines in the US are really bad. What if there was an airline that funded itself ConstitutionDAO-style, bought used airplanes, ran all company functions onchain, and rewarded early users, employees, and workers with tokens instead of the existing miles reward systems, where the minting mechanism is as opaque as the redeeming mechanism? An open protocol could facilitate all bookings, with multiple clients built on top. Someone should try!

Onchain ClassPass protocol — a smallish market, but the reason ClassPass has failed is that it treated its gym partners poorly, updating its protocol many times after the gyms agreed to join. With a permissionless protocol, the gyms would have more of a say in the underlying protocol, and if properly incentivized, would do what's in the best interest of the collective.

I could name 100, but we need more experiments for real things in crypto.

The Future is Open Protocols

The future of the internet is open protocols. With a mechanism to distribute value to the early contributors of a protocol outside of a company structure for the first time, there are so many improvements to existing infrastructure that could be built. These protocols have unlimited potential. At the base lies a tool we could use for a lot of good, and it's up to us to build it. It is so early — most of the world hasn't interacted with these protocols and applications, or even been onchain. Drown out all the noise, and focus on building something people love.

AI

Clients and protocols are also at play in the world of AI. AI has seen a massive bull run, especially in Silicon Valley, during the crypto bear market, with the launch of OpenAI's ChatGPT. OpenAI did something no one has done before: they built the underlying protocol (GPT-4) AND a successful application on top of it (ChatGPT) that reached 100M users faster than any product in history. That would be like Vitalik/EF building Ethereum (the protocol) and also building a killer application like Uniswap that reached a hundred million users. There aren't even 100M monthly active wallets onchain yet.

Another advancement of AI is that it's the first modality in history in which the interactions between the client and the protocol happen in English, not solely code. Many of the applications people are building today — from consumer applications to dev tooling — rely on prompt engineering and the chaining of prompts that interact with the protocol to produce the end product for the user. This makes it incredibly easy for anyone, not just developers, to build products that people love, and I only see this benefiting all builders, crypto or otherwise, as they combine natural-language protocols with creative prompt engineering or unique datasets to create differentiated products.
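The chaining idea can be sketched in a few lines. This is a minimal, illustrative sketch: `call_model` is a hypothetical stand-in for any chat-completion API call (OpenAI, Anthropic, etc.), stubbed here so the example is self-contained — the structure of the chain is the point, not the specific provider.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a chat-completion request)."""
    return f"<model response to: {prompt}>"


def chain_prompts(user_input: str) -> str:
    """Chain three prompts: each step's output feeds the next prompt."""
    # Step 1: extract the key facts from the user's input.
    facts = call_model(f"List the key facts in: {user_input}")
    # Step 2: use those facts to draft an answer.
    draft = call_model(f"Using these facts, draft an answer: {facts}")
    # Step 3: polish the draft for the end user.
    return call_model(f"Rewrite this clearly and concisely: {draft}")


print(chain_prompts("What should I know before booking a flight?"))
```

Each English prompt plays the role that a function call plays in a traditional API: the client composes them, and the protocol (the model) does the work.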

Clients in AI take many form factors, and many of the cool ones we're seeing sit at the edges of personal information where the Google crawler can't reach: your emails (Ultramail), your personal and professional conversations (Rabbit/Tab/Rewind), and the world around us (Humane).

Competing Models

The Web1.0 and Web2.0 models are competing heavily in AI. OpenAI, Google, and other tech giants are taking a closed-source approach that more resembles Web2.0. Meta, Mistral, and X are following a more Web1.0 approach. Gensyn, Together, and others are experimenting with a more decentralized, Web3.0 approach to building these models.

We are very far from seeing who the winner will be — it's too early to tell — though closed source seems to have a 12+ month advantage over open source, based on what's publicly released. There is a massive information war going on in AI, though, between the perception of how far along companies publicly are and where they actually are — the stakes couldn't be higher.

On top of this, there is a massive incentive in AI to keep your protocol closed source. If you have an advantage in intelligence, you can use it to further increase your intelligence. This concept seemed to be a driving force behind Google's growth in the early days: they had an advantage in their product and began shipping so fast that no one else could catch up.

Clients utilizing multiple protocols

One unique overlap between AI and Web3.0 is switching at the client level between different protocols. This has been most prevalent since the launch of Claude Opus/Sonnet, with people integrating both Opus and GPT-4, because the more intelligent the model (protocol), the better the client's product. Some models are better than others at certain tasks, like coding, but as long as the models are integrated into the client, users can decide for themselves which model is best for their task. This is similar to crypto, where clients like Uniswap support dozens of different chains (protocols).
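A multi-model client like the ones described above can be sketched as a thin routing layer. The backend names ("chat-model", "code-model") and the routing heuristic below are illustrative assumptions, not real provider APIs; in practice each backend would wrap a call to a specific model.

```python
from typing import Callable, Dict, Optional

class MultiModelClient:
    """A client that integrates several model backends (protocols) and
    lets the user, or a simple heuristic, pick one per task."""

    def __init__(self) -> None:
        self.backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self.backends[name] = backend

    def ask(self, prompt: str, model: Optional[str] = None) -> str:
        # The user can choose a model explicitly; otherwise fall back to
        # a naive task-based default.
        if model is None:
            model = "code-model" if "code" in prompt.lower() else "chat-model"
        return self.backends[model](prompt)

# Register two stubbed backends; real ones would call the providers' APIs.
client = MultiModelClient()
client.register("chat-model", lambda p: f"[chat-model] {p}")
client.register("code-model", lambda p: f"[code-model] {p}")

print(client.ask("Write code to sort a list"))  # routed to code-model
print(client.ask("Summarize this article"))     # routed to chat-model
```

The client stays the same while protocols are swapped underneath it — the same pattern as a Web3.0 client adding support for a new chain.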

Cursor integrating their own model

Where will the value accrue?

It seems like the protocol is where value is accruing in AI — it's where 70%+ of the funding in the AI space has been deployed. Any advantage at the client level has been used to improve the protocol: clients like OpenAI's ChatGPT and Anthropic's Claude directly help the underlying protocol through RLHF. It's hard to think of applications that could succeed without some edge in data, UX, etc. that lets them innovate fast enough that competitors can't catch up. Obviously, it remains to be seen.

Final thoughts

There are tradeoffs for everything. Prioritizing open protocols comes with its own challenges, like navigating decentralized governance or incentivization — it's not always the optimist's dream of what the technology could be. Also, no one knows what the future will look like. But looking at the experiments we've run in the past can help guide us on our journey.

Whether you're building a company that's just a client, a closed protocol/client, or an open protocol/client, it's important to know the foundation on which you build. Exploring the client-versus-protocol question throughout the eras of the internet helped me understand where we are and gave me a new framework for evaluating ideas. I hope you found it useful!

#ai #web3.0 #web2.0 #web1.0