Who Owns Data? Rethinking Value in a Decentralized World

DePIN Ecosystem
March 26, 2025

The Paradox of Free Data

In 2006, Clive Humby coined the phrase “Data is the new oil” to emphasize that data, like oil, must be refined and analyzed to be valuable. Many businesses have since fallen into the trap of believing raw data holds inherent worth. In reality, much of it is messy, biased, or simply incorrect, which makes data provenance and refinement essential. The rise of AI, particularly tools like ChatGPT, has further complicated the data landscape. AI models rely on massive datasets that can include misinformation, outdated facts, or biased sources, and they produce unreliable outputs if that material is not properly validated. Security risks arise as well, since AI systems process sensitive data, raising questions about privacy and intellectual property. And because AI-generated insights reflect the biases present in their training data, they can reinforce systemic inaccuracies.

Data isn’t just a resource to extract; it is a currency that consumers can actively invest in, something individuals intentionally share in exchange for value such as personalized services, recommendations, or better user experiences. Like money, data is worth something when it circulates: consumers "spend" it when they engage with platforms, and businesses use it to refine their products and make data-driven decisions.

Instead of the consumers who actually generate the data setting the terms, Big Tech rules the data game. Open access to data drives innovation, research, and economic growth, yet the most valuable datasets sit behind corporate paywalls or walled gardens. The big question: if data should be freely available, who funds its collection, validation, and distribution? And what are the flaws in the data administration models we rely on today? Let’s examine the possibilities.

Data Storage

Modern data storage has shifted from controlled on-premise environments to a sprawling mix of cloud platforms and SaaS applications. Unlike traditional systems, where data remained within enterprise networks, information is now scattered across multiple external platforms, making storage highly fragmented.

With businesses using an average of 80 SaaS applications (as of 2023), data exists in multiple locations, often duplicated across different systems. This widespread distribution creates challenges in tracking, managing, and retrieving critical information. Many databases also lack efficient archival or deletion capabilities, complicating long-term data management.
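
To make the duplication problem concrete, here is a minimal Python sketch of one common mitigation: fingerprinting records exported from different tools with a content hash so copies of the same entity can be spotted across systems. The field names and the two example exports are hypothetical, purely for illustration.

```python
import hashlib
import json

def fingerprint(record: dict, fields: list[str]) -> str:
    """Hash a normalized subset of fields so the same entity gets the same
    fingerprint regardless of which SaaS tool exported it."""
    canonical = {f: str(record.get(f, "")).strip().lower() for f in fields}
    return hashlib.sha256(json.dumps(canonical, sort_keys=True).encode()).hexdigest()

# Hypothetical exports from two different SaaS tools holding the same customer.
crm_export = [{"email": "ada@example.com", "name": "Ada Lovelace", "plan": "pro"}]
billing_export = [{"email": " Ada@example.com ", "name": "Ada Lovelace", "seats": 3}]

seen: dict[str, str] = {}
for source, rows in [("crm", crm_export), ("billing", billing_export)]:
    for row in rows:
        fp = fingerprint(row, fields=["email", "name"])
        if fp in seen:
            print(f"Duplicate entity: {source} record already present in {seen[fp]}")
        else:
            seen[fp] = source
```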

A clearer understanding of storage infrastructure is essential for maintaining accessibility, security, and compliance. As data continues to spread across various platforms, organizations must prioritize visibility and control to navigate the complexities of modern storage systems effectively.

Accessibility

The internet has become a vast surveillance network where tech companies collect and store user data, making it accessible to the U.S. government through the Foreign Intelligence Surveillance Act (FISA). Every action online—searches, messages, and location tracking—is logged and stored in massive data centers or cloud platforms, controlled by companies like Meta, Google, Apple, and Amazon.

Government agencies use this data for surveillance. With the 2024 FISA reauthorization, the government can compel a broader range of businesses to provide access to data stored on their servers, further expanding its reach.

Privacy is increasingly difficult to maintain, as most online activity is stored indefinitely. End-to-end encryption can provide some protection, but data backups, Wi-Fi providers, and secondary platforms still create vulnerabilities. Ultimately, safeguarding privacy requires deliberate choices, as surveillance has become the default state of the internet.

Distribution

Tech giants handle massive amounts of data using advanced storage and processing systems. They store data across multiple locations to keep it safe and easily accessible, using platforms like Google Cloud Storage and Amazon S3. To process this data quickly, they rely on powerful computing tools that break tasks into smaller pieces and run them simultaneously.
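
As a rough illustration of “break tasks into smaller pieces and run them simultaneously,” the sketch below splits a word-count job across worker processes using Python’s standard library. It is a toy map-reduce under that assumption, not how any particular cloud provider actually implements its pipelines.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

def count_words(chunk: list[str]) -> Counter:
    """Map step: count words in one slice of the dataset."""
    counts = Counter()
    for line in chunk:
        counts.update(line.lower().split())
    return counts

def parallel_word_count(lines: list[str], workers: int = 4) -> Counter:
    """Split the input into chunks, process them in parallel, then merge."""
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    total = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(count_words, chunks):
            total += partial  # Reduce step: merge partial results.
    return total

if __name__ == "__main__":
    data = ["the quick brown fox", "the lazy dog", "the fox again"] * 1000
    print(parallel_word_count(data).most_common(3))
```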

Live data, such as user interactions, is processed in real time using specialized tools, ensuring instant responses. To speed up access, they use smart indexing, caching, and AI-powered predictions. Networks and content delivery systems further reduce delays, while strict security measures, like encryption and access controls, protect user data.
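
The caching mentioned above can be as simple as keeping recently requested items in memory so repeated lookups skip the slow backing store. Here is a minimal sketch, with a hypothetical fetch_profile_from_db function standing in for the slow data store:

```python
from functools import lru_cache
import time

def fetch_profile_from_db(user_id: int) -> dict:
    """Stand-in for a slow database or cross-region read."""
    time.sleep(0.2)  # Simulated latency.
    return {"id": user_id, "name": f"user-{user_id}"}

@lru_cache(maxsize=10_000)
def get_profile(user_id: int) -> dict:
    """Cached lookup: repeated requests for the same user are served from memory."""
    return fetch_profile_from_db(user_id)

start = time.perf_counter()
get_profile(42)            # Cache miss: pays the full latency.
first = time.perf_counter() - start

start = time.perf_counter()
get_profile(42)            # Cache hit: returns almost instantly.
second = time.perf_counter() - start

print(f"miss: {first:.3f}s, hit: {second:.6f}s")
```

Real systems layer the same idea at several levels: CDN edge caches close to users, in-memory stores such as Redis, and result caches inside databases.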

All of this happens behind the scenes, with users unaware of the complex systems running in the background.

Why the Data Model is Broken

  • Infrastructure isn’t free: Storing, updating, and verifying data requires bandwidth, servers, and computational resources.
  • Incentives are misaligned: Users generate data but don’t benefit from its monetization.
  • AI, research, and finance depend on real-time data: Without sustainable funding, free data sources become outdated or unreliable.

Can Decentralization Solve This?

  • Blockchain and DePIN (Decentralized Physical Infrastructure Networks) offer a new model: Instead of corporate monopolies, data can be sourced and validated by a decentralized network of contributors (a toy sketch follows this list).
  • Teneo’s approach: Community-run nodes process and validate data, earning rewards in return.
  • A more equitable system: Rather than a handful of companies and the government profiting, contributors are fairly compensated for their role in data infrastructure.
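
To make community validation concrete, here is a deliberately simplified sketch: several independent nodes report a value, the network accepts the majority answer, and the nodes that agreed with it split a reward pool. Every name and number here is illustrative; this is not Teneo’s actual protocol or reward formula.

```python
from collections import Counter

def settle_round(reports: dict[str, str], reward_pool: float) -> tuple[str, dict[str, float]]:
    """Accept the value most nodes reported and split the reward among them.

    reports maps node id -> the value that node observed and validated.
    Returns the accepted value and each node's payout.
    """
    tally = Counter(reports.values())
    accepted, _ = tally.most_common(1)[0]
    honest = [node for node, value in reports.items() if value == accepted]
    payout = reward_pool / len(honest)
    return accepted, {node: (payout if node in honest else 0.0) for node in reports}

# Three nodes agree; one reports a manipulated value and earns nothing this round.
reports = {"node-a": "42.1", "node-b": "42.1", "node-c": "42.1", "node-d": "99.9"}
accepted, rewards = settle_round(reports, reward_pool=12.0)
print(accepted)   # 42.1
print(rewards)    # {'node-a': 4.0, 'node-b': 4.0, 'node-c': 4.0, 'node-d': 0.0}
```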

The Future of Data: A Sustainable Model?

  • Aligning incentives: A decentralized network rewards participants, ensuring continuous and accurate data flows.
  • Data integrity through decentralization: With multiple validators, the risk of manipulated or low-quality data is reduced (a rough calculation follows this list).
  • Long-term vision: Can we create an economy where data remains open while those who maintain it are incentivized fairly?
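
As a back-of-the-envelope illustration of why multiple validators reduce the risk of bad data slipping through: if each validator independently reports correctly with some probability, the chance that a majority gets it wrong shrinks quickly as the panel grows. The 90% accuracy figure below is an arbitrary assumption for illustration.

```python
from math import comb

def majority_wrong(n: int, p_correct: float) -> float:
    """Probability that a strict majority of n independent validators is wrong."""
    p_wrong = 1 - p_correct
    need = n // 2 + 1  # How many validators must err for the majority to be wrong.
    return sum(comb(n, k) * p_wrong**k * p_correct**(n - k) for k in range(need, n + 1))

for n in (1, 3, 5, 9):
    print(f"{n} validators: majority wrong with probability {majority_wrong(n, 0.9):.4%}")
```

The independence assumption is doing real work here: validators that copy one another or share a single upstream source do not compound their reliability this way.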

The dream of open data is only possible if we rethink who funds and maintains it. Traditional models rely on monopolies, subscriptions, or government support—but decentralization offers a new path. At Teneo, we create sustainable, community-driven data ecosystems, ensuring fair compensation while keeping data accessible. The future of data isn’t just about being open—it’s about being fair, transparent, and community-powered.