Studying the history of life is an important venture. It’s how we understand why certain characteristics exist in living organisms, and it helps explain the significance of biological events happening today.
A study on the population density of Tyrannosaurus rex, one of the world’s most famous predators, was published in Science and reported by National Geographic. The researchers estimate that a total of 2.5 billion T. rex lived in North America, the species’ native range, stretching as far north as Alaska and as far south as Mexico, across a span of two to three million years.
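A headline figure like this comes from a chain of multiplied estimates rather than any direct count. A minimal back-of-the-envelope sketch, using round inputs in the spirit of those reported for the study (the exact values below are assumptions for illustration, not the paper's precise numbers):

```python
# Rough reconstruction of how a total-population figure is built up:
# (individuals alive at any one time) x (number of generations in the span).
# All three inputs are assumed round numbers for illustration.
standing_population = 20_000   # individuals alive at any given moment
generation_years = 19          # assumed T. rex generation time
span_years = 2_500_000         # midpoint of the two-to-three-million-year range

generations = span_years / generation_years
total_individuals = standing_population * generations

# Comes out around 2.6 billion, the same order as the reported 2.5 billion.
print(f"~{total_individuals:.2e} individuals ever alive")
```

Note how sensitive the result is: halve the assumed generation time and the total doubles, which is exactly why the authors publish upper and lower bounds rather than a single number.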
This estimate is a huge claim and has certainly caught the eye of paleontology enthusiasts. However, a wide variety of variables can compromise the validity of the underlying data: the locations where the bones are found; shifts caused by glacial movement over the years; the imprecision of radiometric dating, which provides only approximate ages (carbon-14, for instance, cannot reach back beyond roughly 50,000 years, so far older dinosaur fossils must be dated indirectly); and even human intervention, which cannot fill the gaps in information that we do not yet have the tools to understand.
If data-driven ventures cannot sample what we know to be true, are they still worth pursuing? Are approximations a step in the right direction, or are they too rooted in theory to be useful?
The pursuit of estimates often discounts the importance of absolutes. In paleontology, plenty of assumptions are made that may affect the results of research. As Alexander mused, much remains unsaid about the foundations of the study, and that may affect whether scientists are taking the right perspective on the matter.
Analyzing data from the source and having a clear log of how the researchers conducted their tests is standard procedure. However, what is the impact of creating logs for circumstances that can no longer be observed by anyone living?
“Who decided that the dinosaur is a dinosaur or not a dinosaur? Who decides that it sits in this area of time as opposed to another? What if my carbon dating is wrong, and maybe this aquatic animal that we didn't think existed prehistorically actually did exist?” Alexander asked, expressing doubts.
This is the second time that scientists have attempted to estimate the historical population density of T. rex, and the results closely resemble an earlier estimate published in 1993. The difference between the two papers is that the most recent study draws on the latest T. rex biology research to set upper and lower limits on the total population: one approximation stacked on another.
Since there is so much inexactness and uncertainty in what we do, it is important to focus on the fundamentals: ideas, principles, and beliefs that we know to be observable, objective, and tangible. When we go overboard on theory, we may find ourselves defining a biased picture of what the data represents.
This concern is not just limited to research and development in paleontology. With the vast variety of tools, knowledge, and technology that we have at our disposal today, it can become all too easy to take the wrong direction. When we take the next step forward, we need to make sure that our feet are planted firmly on the ground.
At the pace that science and technology are developing today, it’s safe to assume that more discoveries will be made, not just in paleontology but in other sciences and across other industries as well. It is vital that scientists continue working to make these discoveries more accessible to the public while staying true to the path of innovation.
There is a different impact in analyzing tangible beings, objects, and events. TARTLE is an opportunity to look at the T. rexes of the modern world: clear and imminent threats capable of harming us and the people we care about. The TARTLE platform is an opportunity to connect with like-minded individuals and organizations so that we can work as a collective to preserve our earth and our economy for future generations.
Despite significant technological and scientific progress in the study of physics, time, and space, it looks like we still have a long way to go before we ever truly understand the impact of what we are looking for.
When exploring the origins of the universe and the nature of everything we see and know, in what ways have we exhausted our pursuit for scientific inquiry? How can we improve the way we study such an important part of our existence? Is it possible to become too data-driven in our search for meaning?
Jason suggests that we continue to fall short of understanding the universe because we’ve never pursued a proper relationship with the subject matter. This could be the case; plenty of social studies call for researchers to immerse themselves in the communities they study. If we acknowledge the universe as a dynamic, living, breathing entity, this could be a new take on an age-old problem.
In his book Antifragile, Nassim Nicholas Taleb describes the Green Lumber Fallacy, which points to one’s incapacity to truly understand the implications of what one knows and how to use it. It is rooted in the idea that while we may be focused on the right issues, we may not yet be capable of fully comprehending their complexity.
Indeed, our desire for knowledge is going in the right direction. But are we looking at it from the right perspective? Alexander points out that while 99 percent of our universe is composed of material we can’t see, “we’re looking at one percent, maybe less—and think we’re bad-ass, and we have the answer for all of it.”
“That's like, I have the ocean on earth. I've taken out one droplet of water. One. I'm going to study it and say all of the fundamental rules of the universe and everything sits right here in this one drop, because I can see it,” he continued.
Taleb further explores the concept of antifragility: things that are not just resilient to disorder, but dependent on it for growth and development.
Parallels can be drawn between antifragility and the scientific method. This is because the constant search for knowledge requires that researchers are always open to the possibility of having their hypotheses disproven. With such a massive universe left to comprehend and explore, it would be a step backward for humanity to assume that we already have all the tools, equipment, and mindset required to uncover the truth.
It’s on us to continuously question the methodologies we’ve set for ourselves. Are we maximizing our progress when we take the conservative approach? Do we still give ourselves room for creativity?
Beyond exploring the big cosmic question, modern advancement has taken an aggressive view and approach to nature. Our thirst for development has led us to create sprawling urban jungles that have taken over large swathes of lush greenery. We’ve replaced rivers, forests, and habitats with rock-hard concrete and gas-belching machinery.
It’s time to be more discerning of what we leave behind when we reach for the stars.
TARTLE goes beyond the surface to bring two human parties together. It’s a platform that gives people the opportunity to support experiences they may have never been exposed to otherwise.
The benefit is twofold: the first is in the transfer of skills and knowledge between communities who become invested in a common cause. The second is the capacity for these causes to look for alternative sources for funding, from people and entities that they would never have been able to reach without the platform.
Antifragility is a constant test of our character, especially when we’re exposed to lived realities that are so different from ours. However, it’s also an important part of the authentic human experience. Underneath the chaos of sharing this world with 7.6 billion other people are simple hopes, dreams, and aspirations—a chance to find common ground and empowerment in our common humanity.
Forks in the Road
Entertainment is full of examples of technology gone wrong. Every dystopian sci-fi movie makes use of this to some degree. Either technology runs amok and enslaves humanity as in The Terminator or The Matrix, or we become so enamored of a technology we enslave ourselves to it as in Gattaca. In still others, technology becomes a tool that is used to suppress humanity, most famously in the novels 1984, Brave New World and Fahrenheit 451. And if we are honest, we can look to all of these examples and see parallels with technological development today.
That’s because there is no such thing as a free lunch. Everything comes with some sort of trade-off or dark side. It will always be possible to take an objective good and pervert it into something destructive. The very real-life development of nuclear power is a poignant example. Even the old-school, brute-force fission reactors that are still the most common each produce tens of thousands of megawatt-hours of electricity every day. And they do this with no carbon emissions on the production end. The only thing stopping them from producing more is their relatively small number, with fewer than a hundred operating in the United States.
However, with all that promise comes the proverbial dark side, which Hiroshima and Nagasaki experienced first-hand in 1945. While no nuclear weapon has been used in war since then, the threat has loomed over the world like the Sword of Damocles. Trillions have been spent developing ever more powerful nuclear bombs and methods to deliver them. Trillions that could have been spent researching fusion reactors, an even more powerful energy source with a fraction of the radioactive waste of fission. Instead, fusion research led to the hydrogen bomb, a type of nuke that makes Fat Man and Little Boy look like glorified firecrackers.
We stand at a similar technological fork in the road today. As our knowledge of genetics and our ability to manipulate genes grow, we will be faced with difficult choices about how to use this technology. The same technology that could eliminate genetic predispositions to various diseases could also be used to trigger those dispositions in others. Slowing down or eliminating aging could create a world of selfish would-be immortals actively preventing the birth and development of future generations. The same technology that creates a new vaccine could create a new virus to unleash on an unsuspecting world.
Less dramatic is the idea that companies will simply use these advancements to control whole markets in new ways. Take the situation with genetically modified crops. GMOs have been a great help in getting food to grow in environments that are typically hostile, allowing more to be grown for and by those in challenging conditions, but there has also been a cost. Some companies, like Monsanto, control aspects of the GMO market with an iron grip. They do this either by engineering seeds that won’t germinate a second generation or, when patented seed does spread, by suing farmers for “intellectual property” infringement because GMO seed germinated in a neighboring field. That kind of action can kill a farmer’s business. Non-germinating seed forces a farmer to buy fresh seed every year, instead of growing this year’s crop from last year’s seed as in the old days. That keeps prices artificially high and puts farmers at risk should bad weather kill enough of their crop that they can’t afford to buy new seed.
The point is that we have to be very careful with how we use our technology. It can often be used to destroy rather than help others. Not only that, the destructive option is usually the easier one in the short term. Just look at fusion again. We built a bomb with it decades ago but we still haven’t figured out how to make a commercially viable fusion reactor.
Just as our choices with nuclear power defined much of the world for the latter half of the twentieth century, so our choices with genetic modification will define the world for what’s left of the twenty-first. We must choose, and choose wisely.
What’s your data worth?
Genentech and Priorities
It came to our attention that we tend to point out all the bad stuff going on. Whether it’s social media companies selling your data, other companies skimming it off your activity or tracking your location, or governments trying to force their way into the blockchain economy, we tend to focus too much on the negatives. While that is a natural tendency of human nature (the news has a saying – if it bleeds, it leads), there is plenty of good stuff going on as well. We at TARTLE think it’s our responsibility to make sure you know at least as much about the good stuff as you do the bad.
Some of those good things are coming from a company called Genentech. This 40-year-old healthcare company exists to help better treat people suffering from some of the worst diseases around. Of course, it’s worth noting that this goal isn’t unique in itself. What is unique is that Genentech starts with helping people as the goal. They don’t spend a lot of time talking about profit. Not that they don’t make one, it’s just a natural result of them pursuing their primary goal of improving the health of people everywhere.
Another aspect of the way Genentech operates is that they don’t just work on improving health on the back end, treating people when they are sick. They also work on getting out in front of the problem by taking care of the environment around them.
How are they doing that? Genentech is actually doing quite a bit. They are being transparent about their goals for reducing water use and greenhouse gas emissions, and they are taking active steps to make those goals a reality. In transportation, they are building a fully electric vehicle fleet for their campus and providing service for that last leg from public transport, not just for Genentech employees but for those of nearby businesses that aren’t big enough to have their own fleets. They are even helping to build out the rest of the infrastructure in the San Francisco Bay Area. In water use, they’ve managed to save 78 million gallons of water in just three years. I don’t care who you are, that’s impressive. In energy, they are working toward having all the electricity needs of their campuses supplied by clean sources. They are already well along the way and plan on getting to 100% clean electricity by 2025.
Now, all of that isn’t to say that Genentech is perfect. What are they missing? The right approach to data. They recently signed a multimillion-dollar deal with 23andMe to gain access to the genomic data on file. So, if you have done a DNA test with 23andMe, Genentech is able to access and use that information without your knowledge.
Naturally, being able to access large amounts of data like that is a big asset for a healthcare company on the cutting edge of developing new treatments. Yet, they should still be getting their data ethically, not buying it from someone who shouldn’t be selling it in the first place.
This is why TARTLE is working so hard to get our name out. Many companies like Genentech would love to get the information they need from people who give their fully informed consent, yet they haven’t realized that there is a way to do exactly that. All it takes is signing up in our digital marketplace as a buyer and searching for exactly the kind of data they need. Not only would they be getting it from people who are willingly sharing their information, they would also be able to go back to the same people for follow-ups on responses to treatments, relevant lifestyle questions, and much more. What’s more, they would not only be getting their data ethically, they would likely get it at less cost. When people understand what their data will be used for, and that it will help others, they are more likely to donate that information. In that way, sharing data becomes ethical and charitable, and that is clearly good for everyone.
What’s your data worth?
Data Science vs IT
Believe it or not, there is often more than one department at a company that does things with computers. Even more shocking, they do different things. Unfortunately, not everyone knows or appreciates that, even within the same company.
For example, as companies become more and more data-driven, they are looking to their data science/analytics departments for solutions. That certainly makes sense on the surface. But corporate executives tend to forget that analysts can’t institute new systems on their own; implementing new analytics will often require outside support. IT, meanwhile, is largely a maintenance department that occasionally finds new solutions to problems. Within a major company, IT’s job is largely to keep the computers working, on both the hardware and the software side. At best, they can identify and install new software for gathering data, but they typically won’t do the gathering.
Conversely, the data science division might identify the software but most likely would not be handling the install. So when a business tries to get more value from its data, it can cause unintended conflict between data science and IT, usually with data science blaming lack of IT support for its inability to get enough quality data to do the job. However, what data science needs first and foremost is to get data and analyze it; it doesn’t much matter how. One great way to do that is to sign up with TARTLE and buy data from us. Or, more accurately, from you.
One of the great advantages is that we offer access to source data in real time. We can connect you with our members willing to sell access to their medical data. This is not only past data but current data, since it is possible to connect to various health-tracking apps as well as IoT devices like Fitbits. If there is a need for more specific data about behaviors, you can simply ask our members directly. Since they are all here with the goal of being rewarded for sharing their information, the engagement rate is going to be a lot higher than with a blanket survey sent out to the general public.
This is all in stark contrast to the way data analysts typically get their info. They usually get it from third party aggregators or from social media companies that sell it in large blocks. There is little opportunity for buyers to customize what they are getting, meaning they have to spend a lot of time and money sifting through data that is probably irrelevant. It’s also old. Data acquired through second, third, or even fourth parties is likely to be weeks or even months old, meaning that you are trying to make solid business decisions based on old information. In essence, you’re guessing. Yes, it’s an educated guess, but a guess nonetheless. Real time data allows for much more accurate projections as it minimizes the time between observed behavior and the response to it, whether that be a new product, marketing plan, or company policy.
Another benefit of working directly with TARTLE’s users is that you only pay for the data you need when you need it. We’ve already touched on the fact that through current means of data collection, you will likely spend a lot of time sifting data you don’t need. You might also have to spend money to get it in regular batches, usually coming at times when it isn’t needed. With TARTLE you can customize the type of data you need, how much, and when, which in the end will save time and money.
If you’re working in a data science department, you can sign up as a buyer at TARTLE.co today and get started. We’ll get you connected with members eager to help you develop better solutions for your business.
What’s your data worth?
Fiber Optic Data
The world is awash in data. There is data coming in from research, phone calls, satellites, phones, Fitbits, and even your Bluetooth-connected fridge. Collecting data isn’t our problem; being able to process it is. Before you can process it, though, it needs to be transported. In that sense it’s like any other raw product. Like a piece of iron ore, it needs to be carried to a foundry and dumped into a furnace to be refined into something useful. Data needs to make it from your IoT device to a server where it can be processed and analyzed. Too often, transportation and processing are bottlenecks in the transformation of raw data into useful information.
Think about a highway: you can only increase the volume of cars on the road so much before it descends into chaos. Yet there may still be a need to get even more vehicles, or at least people and products, from point A to point B, so you need new ways to handle the traffic. Data is similar. Most data is still transferred over some kind of copper wire, and that wire can handle only so many electrons moving through it, just like a highway can accommodate only so many vehicles. For years, though, those older copper cables have been getting replaced with fiber optics. Basically long pieces of very thin, flexible glass, fiber optic cables use photons instead of electrons to transfer data. Immediately there is a gain, since the medium allows for faster movement of data. There are also new fiber optics being developed that allow for speeds up to 100 times faster than what is currently available. How fast is that? For a point of reference, imagine walking at 100 times your current pace. Instead of power walking at around 3 to 4 mph, you would suddenly be able to walk from Chicago to Washington, D.C. in less than three hours.
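To make a 100x jump in link speed concrete, here is an illustrative sketch; the dataset size and the baseline link rate are assumed numbers chosen for the example, not figures from the fiber research itself:

```python
# Illustrative only: how a 100x faster link changes the time to move
# a fixed amount of data. Both inputs below are assumptions.
dataset_bits = 1e12 * 8          # a 1 TB dataset, expressed in bits
baseline_bps = 10e9              # assume a 10 Gbps baseline link
faster_bps = baseline_bps * 100  # the same link, 100 times faster

def transfer_seconds(bits: float, rate_bps: float) -> float:
    """Time to move `bits` of data over a link running at `rate_bps` bits/second."""
    return bits / rate_bps

print(transfer_seconds(dataset_bits, baseline_bps))  # 800.0 seconds (~13 minutes)
print(transfer_seconds(dataset_bits, faster_bps))    # 8.0 seconds
```

The same proportionality is why the bottleneck simply moves: once the link is no longer the slow step, whatever processes the data at the far end is.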
Yet that presents its own problems. Fiber optics have a massive capacity for data because of their ability to send many signals simultaneously. However, when too many signals go through at once, they become a jumbled mess. It’s similar to how one person’s echo is easy to understand while the echo of a choir singing is indecipherable to the human ear. Thankfully, there are clever software writers out there who can write the algorithms needed to untangle that mess. In fact, with the new fiber optics coming out soon, the bottleneck won’t be the data transportation; it will be the ability to untangle that data into discernible bits of information so it can be analyzed. Essentially, the physical technology is already here; we are just working to bring the software side of things up to the same level.
In a sense the kind of data analytics and processing that TARTLE works with is similar. The standard way of aggregating data from second and third parties has a lot of noise embedded in those signals, even after it has been processed. That is because there is a lot of circumstance and context mixed into the kind of data that is gleaned off monitoring your devices and internet activity. And as it turns out, there is no mere algorithm that can filter out that noise. The only way to get a clearer signal is by doing something the big companies rarely do, go to the source, to you the individual. The answer to “why” you did one thing instead of another is the only algorithm that can truly help decipher that data. It gives the context that is missed when companies only look at your data and never to you as a person. TARTLE provides an avenue to get the answer to “why”, making our system the most efficient way to get clear and accurate data about people and why they do what they do.
What’s your data worth?