The Keys to Effective Data Science Projects – Part 4: Explore the Data


We’re in a series on the “Keys to Effective Data Science Projects”. We’ve identified the question we want to answer and made a preliminary pass at the data we need to answer it. Next, we brought that data into a central location we can work with. Now we want to explore that data.

This is a primary difference between Data Science and Business Intelligence (BI). In BI solutions we find our source data, transform it into a desired structure, and then perform the aggregations that fit the types of queries the users will ask. This process is called “ETL” – for Extract, Transform and Load.

In Data Science, we *don’t* do ETL – we do ELT. We extract the data, load it, and save any transformation for last – for now, we leave the data alone. We just extract and load it. Next it’s important to simply look at the data.
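To make the distinction concrete, here’s a minimal ELT sketch in Python using pandas and SQLite (the file and table names are hypothetical examples): we extract the source exactly as delivered and load it into a staging area, with no transformation at all.

```python
import sqlite3
import pandas as pd

# Extract: read the source exactly as delivered (everything as text,
# no type coercion, no cleanup)
raw = pd.read_csv("source_export.csv", dtype=str)

# Load: land it untouched in a staging table we can explore later
with sqlite3.connect("staging.db") as conn:
    raw.to_sql("raw_source_export", conn, if_exists="replace", index=False)
```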

Now, if you’re a data professional, this will be one of the hardest steps you’ll ever do. We’re trained to look at some data, query it, and probably create some reports – we try to find meaning. And that’s not what we want to do at this phase. What we need to do is explore the data to simply find out more about what it is. This is a deceptively simple statement.

We’re not looking to find meaning in the data – we’re looking for the meaning of the data as a source. And we do that by opening the data and documenting it. Just that.

Here are a few questions to get you started (a short code sketch after the list shows how some of them can be checked):

  1. What is the source of the data? Where did you get it?
  2. Why did you get it from there?
  3. How is it structured, or not structured?
  4. If it has “rows”, how many are there?
  5. If it has “columns”, how many are there?
  6. Are there any missing values? Where? How many? How many as a percentage of that column or those rows?
  7. What does each row represent, if you simply read it?
  8. What does each column represent, if you simply read it?
  9. Can you find out what it really means? How? Who would you check with?
  10. If it has numbers in it, are there any aggregations that seem to make sense? (sums, etc.)
  11. If it has numbers in it, are there any descriptive statistics that make sense? (average, standard deviation, minimum, maximum, etc.)
  12. If there are numbers, what are the distributions (quartiles, etc.)?
  13. Is there more than one set of data?
  14. Are there any “natural” join methods between the data sets?
  15. Do we have data covering all the data points needed for the analysis?
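To ground a few of these, here’s a rough pandas sketch of how some of the questions can be checked – assuming the staged table from the earlier sketch, with hypothetical names throughout.

```python
import sqlite3
import pandas as pd

# Pull the staged table back out (table name from the earlier sketch)
with sqlite3.connect("staging.db") as conn:
    df = pd.read_sql("SELECT * FROM raw_source_export", conn)

# Questions 4 and 5: how many rows and columns are there?
print("rows, columns:", df.shape)

# Question 6: missing values per column, and as a percentage of each column
print(df.isna().sum())
print((df.isna().mean() * 100).round(1))

# Questions 10-12: aggregations, descriptive statistics, and distributions.
# Everything was loaded as text, so coerce to numbers where possible first,
# then drop the columns that turned out not to be numeric at all.
numeric = df.apply(pd.to_numeric, errors="coerce").dropna(axis=1, how="all")
print(numeric.describe())  # count, mean, std, min, max, and quartiles

# Question 14: a quick check for a "natural" join between two data sets
# ("customer_id" is a hypothetical shared key)
# other = pd.read_sql("SELECT * FROM raw_other_source", conn)
# print(df["customer_id"].isin(other["customer_id"]).mean())
```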

We simply cannot move forward with any analysis until we understand this data. We’re going to be using it in a statistical sense, so its reliability, its spread, its centrality, and the sizes we’re dealing with are vitally important.

So now – how do we do this? There are lots of mechanisms you can use, from R to Python, from Azure ML to Excel. The technology isn’t actually that important – what matters is that you answer the questions above (and many others).

You will, of course, need to document all of this. Personally, I’m using the Azure Data Catalog, but the bigger point is that you record it somewhere. We’ll use this documentation in the next steps of the series.
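If you don’t have a catalog tool handy, even a generated data dictionary is a start. Here’s a rough sketch that summarizes each column and writes the result to JSON (the output file name is just an example, and the DataFrame is the one loaded in the earlier sketch):

```python
import json
import sqlite3
import pandas as pd

with sqlite3.connect("staging.db") as conn:
    df = pd.read_sql("SELECT * FROM raw_source_export", conn)

def data_dictionary(df: pd.DataFrame) -> dict:
    """Summarize each column: its type, missing rate, and a few sample values."""
    return {
        col: {
            "dtype": str(df[col].dtype),
            "pct_missing": round(float(df[col].isna().mean()) * 100, 1),
            "sample_values": df[col].dropna().astype(str).head(3).tolist(),
        }
        for col in df.columns
    }

with open("raw_source_export_dictionary.json", "w") as f:
    json.dump(data_dictionary(df), f, indent=2)
```

Even that simple summary gives you something concrete to hand to whoever can confirm what the columns really mean (question 9 above).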

So isn’t this just a standard part of the process? What makes this a “Key” thing to do? It’s because I find that this is often lacking in Data Science projects. When we test a Machine Learning model and it does not perform well, most of the time I simply go straight back to the source data process – and I often find the problem there. It’s Key because it’s vital.

 
