Data, it’s kind of a big topic here at TARTLE HQ. It’s the main thing we talk about and work with every day. That’s because we recognize two important things: one, that organizations of all kinds are making ever more conscious use of available data in their decision-making, and two, that there is more and more data available.
As these organizations have come to realize how important data is to their operations, they’ve begun finding more ways to gather it faster and store more of it. This has given rise to fiber optic lines being run through mountains and under oceans in an effort to move data faster. In some applications, such as stock exchanges, trading firms even take their distance to the exchange’s main servers into account when building a server hub; things happen so fast that fractions of a fraction of a second matter. It has also, of course, given rise to massive server stacks around the country so all of this data can be stored for later analysis.
Those stacks and stacks full of data have spawned a whole new field known as data science. In theory, the whole point of data science is to sift all of that collected and stored data in order to get usable information out of it. Naturally, there is a whole process to that. Yet, is the whole process really necessary? Or more accurately, how helpful is it? First, let’s take a quick look at it.
The first step is to identify an issue and come up with a hypothesis as to the underlying problems. The next step is to gather the data necessary to test the hypothesis, followed by refining that data into an easily digestible form. This in turn is followed by identifying and refining the appropriate algorithms, which are then tasked with finally identifying solutions.
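To make that concrete, here is a minimal sketch of what the conventional pipeline might look like in Python. Everything in it (the dataset, the column names, the churn hypothesis) is a hypothetical stand-in, fabricated so the steps can be run end to end; it is an illustration of the process, not anyone’s production system.

```python
# A minimal sketch of the conventional data science pipeline described
# above. The data, columns, and hypothesis are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Step 1: hypothesis -- say, "customers who contact support often
# are more likely to churn."

# Step 2: gather data. Here we fabricate a toy dataset in place of
# whatever the organization has actually collected and stored.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "support_contacts": rng.poisson(2, 500),
    "tenure_months": rng.integers(1, 60, 500),
})
df["churned"] = (df["support_contacts"] > 3).astype(int)

# Step 3: refine the data into an easily digestible form.
X = df[["support_contacts", "tenure_months"]]
y = df["churned"]

# Step 4: identify and refine an algorithm, then task it with
# surfacing an answer.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Note that the people the data describes never appear in this loop; the analyst guesses, gathers, and models entirely at a distance. That gap is exactly what the rest of this piece is about.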
So, does this process make any sense? If we were only analyzing animal migration patterns or the behavior of oxygen at a particular temperature, it would be perfectly appropriate. That basic process (with some modifications to make it fit the actual scientific method) is perfectly fine for dealing with the material world. Even when dealing with people it might be okay, if you were content with generalizing about large groups. Yet, how many people like being lumped in as just one number in a group? We’re pretty sure no one likes being generalized. Not to mention, why be content with a generalization? Surely we can do better?
Fortunately, we can do better. TARTLE exists because we can and should do better. What if we told you that there is a way to cut through the guesswork, to cut through the hypotheses and the refining of the algorithms and visuals and get straight to the heart of the matter? What if you didn’t have to guess and generalize? Good news! All of this is possible precisely because we are dealing with people. Why does that matter? What is different about people? What can you do with them that you can’t do with birds or oxygen atoms? You can go straight to them and ask a question. Even better, those people can answer you.
How does this work in real life? Simple. You suspect there is a particular issue your organization can help with. Since you know the issue, you know the demographic you need to talk to. Since you are already part of TARTLE, you identify the group that fits what you need, ask them a series of relevant questions, and they give you answers. Those answers might confirm your issue or point to another one altogether. And since you were smart enough to also ask about suggested solutions, you can see whether those suggestions point to things your organization can in fact deal with. If the answer is yes, you roll out a proposed solution to the same group of people, who either affirm the solution or suggest refinements.
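To make that loop concrete, here is a rough sketch of the ask-and-refine cycle in Python. Every helper in it is a hypothetical stand-in, not TARTLE’s actual API; the stubs return canned data so the control flow can be run end to end.

```python
# A rough sketch of the ask-and-refine loop described above. The helper
# functions are hypothetical placeholders, not TARTLE's actual API; the
# stubs return canned data so the whole loop can be executed.
from collections import Counter

def survey_group(group, questions):
    # Stand-in for: send questions to a matched demographic, get answers.
    return [
        {"confirms_issue": True, "suggestion": "simpler onboarding"},
        {"confirms_issue": True, "suggestion": "simpler onboarding"},
        {"confirms_issue": False, "suggestion": "lower pricing"},
    ]

def propose_solution(group, proposal):
    # Stand-in for: float a proposed fix back to the same group.
    return {"affirmed": True, "suggested_refinement": None}

def refine_with_group(group, issue):
    answers = survey_group(group, [
        f"Do you experience {issue}?",
        "What should we do about it?",
    ])
    # The answers may confirm the issue or point somewhere else entirely.
    if sum(a["confirms_issue"] for a in answers) < len(answers) / 2:
        raise ValueError("hypothesized issue not confirmed; ask again")

    # Start from the group's own most common suggestion.
    proposal = Counter(a["suggestion"] for a in answers).most_common(1)[0][0]

    # Propose, listen, refine, until the same group affirms the fix.
    while True:
        feedback = propose_solution(group, proposal)
        if feedback["affirmed"]:
            return proposal
        proposal = feedback["suggested_refinement"]

print(refine_with_group("new-parent demographic", "confusing signup flow"))
```

The design point is that the people themselves sit inside the loop: every iteration goes back to the same group rather than to a refined model of them.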
If you notice, the two processes are actually similar. However, with TARTLE the guesswork stops almost immediately, because you are going directly to people and working with them instead of operating on hunches. You get to go right to the source to get the best data possible, so you can come up with the best solution possible.
What’s your data worth?