
Sunday, May 14, 2006 

Trusted Analytics - Bean Counters vs. Experts

Analytics, such as data mining and statistics, produce interesting and useful results. However, can we trust this type of technology and the applications built on top of it?

Here is the issue: data mining and statistics do not produce "exact" results. In fact, asking the same question with different techniques usually yields different answers. To top it off, the computations can be order dependent and non-repeatable across different platforms. This is at odds with most computations usually performed in a database, which are exact and repeatable. For example, the number of sales per country is not order dependent and does not change with the algorithm used to compute it. In a way, we could compare these computations to bean counting. There is no guessing, estimation, or approximation. From this perspective, for most types of calculations, the database is a sophisticated and powerful bean counting device.

How can we come to terms with the "inexact" or approximate nature of the results generated by advanced analytical techniques? Interestingly enough, we are quite used to this type of behavior in our day-to-day life. It is not uncommon for us to enlist the services of an expert when dealing with complex tasks or activities. For example, we routinely seek advice from medical doctors and investment advisers. Expert advice, too, is neither exact nor "repeatable." It is not unusual to get as many different answers as the number of experts we consult. How do we manage these interactions? This is where trust comes into play. Trust is developed over time through a history of successful interactions with an expert, or acquired through referral, where a trusted party vouches for the quality of the expert. When we do not fully trust an expert, seeking a second opinion is a common way to handle the "inexact" nature of expert advice. This approach can be viewed as "expert pooling": we take either the "average" advice or the majority advice, and we usually weight each expert's opinion by how much we trust that expert.

If we rely on experts that provide inexact advice, why not software? Working with analytical components requires a similar concept of trust. We need to feel reassured that the technology we are using, or a particular component, can be trusted. This trust can only be developed by using the technology over time and seeing that it works or, again, by referral. The latter can be the reputation of a company, or the validation that an approach works for solving a type of problem. We can also apply "expert pooling" to analytical results. For example, we can use different techniques and take the average or majority result, weighting each technique's contribution by how much we trust its results. Suppose we want to estimate a house value y based on house characteristics x. We could create two models (model1 and model2) using two different techniques. We could then compute y as model1(x) * t1 + model2(x) * t2, where model1(x) and model2(x) are the estimates generated by model1 and model2 respectively, and t1 and t2 are how much we trust each model.
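This trust-weighted pooling can be sketched in a few lines. The two models below are illustrative stand-ins (simple linear formulas invented for the example), not real trained models, and the trust weights are assumed given:

```python
def pooled_estimate(x, models_and_weights):
    """Combine model estimates using trust weights.

    models_and_weights: list of (model, trust) pairs, where each model
    is a callable returning an estimate for input x, and the trust
    weights sum to 1. Implements y = model1(x)*t1 + model2(x)*t2 + ...
    """
    return sum(model(x) * trust for model, trust in models_and_weights)

# Two hypothetical house-value models (stand-ins for models built
# with two different techniques).
model1 = lambda x: 200_000 + 150 * x["sqft"]
model2 = lambda x: 180_000 + 170 * x["sqft"]

house = {"sqft": 2000}
# model1 estimates 500,000; model2 estimates 520,000.
estimate = pooled_estimate(house, [(model1, 0.6), (model2, 0.4)])
print(estimate)  # 508000.0
```

With equal trust weights this reduces to a plain average; unequal weights let the more trusted technique dominate the pooled answer.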

One way to assign trust to a model is to measure its performance on a held-out data set not used for creating the model. If e1 and e2 are the performances of models 1 and 2 on the held-out data (where higher means better), then we could compute t1 as e1/(e1+e2) and t2 as e2/(e1+e2).
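As a minimal sketch of this normalization, assuming the performance numbers are scores where higher is better (the e1 and e2 values below are invented for illustration):

```python
def trust_weights(performances):
    """Normalize held-out performance scores into trust weights.

    performances: list of scores where higher means better.
    Returns weights t_i = e_i / sum(e) that sum to 1.
    """
    total = sum(performances)
    return [p / total for p in performances]

# Hypothetical held-out performance scores for two models.
e1, e2 = 0.90, 0.60
t1, t2 = trust_weights([e1, e2])
print(t1, t2)  # t1 ≈ 0.6, t2 ≈ 0.4
```

The better-performing model receives proportionally more weight, and the weights sum to 1 so the pooled estimate stays on the same scale as the individual estimates.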

In summary, to address the inexact nature of analytical software results, we need to develop an attitude similar to the one we use to deal with expert advice. We need to understand that these techniques provide advice, or answers to complex questions. Like expert advice, and unlike bean counting, there is uncertainty and a margin for error in the answers. To manage the uncertainty, we need to cultivate trust and apply expert pooling strategies. Trust will increase as the use of analytical components becomes more prevalent. The latter requires making these techniques easier to use. This is the topic for a future post.

Let me know what you think is necessary to address the "inexact" or approximated nature of advanced analytical results. Do you think it is all a matter of trust?

Great comment Marcos - this is why "engineering" models for explainability is so important. Saying "we did X because the analytics told us to" is no good; you must be able to explain the analytics just like you need to explain the policies and procedures (rules) you used.
See this post for more

James,

Your post addresses a very important point I left out in my post: the role of transparency. This is another aspect of building trust that needs to be addressed. My comments focused on the acceptability of the nature of analytical results. But, as you pointed out, after that, we also need explainability (transparency) of decisions. Transparency is probably the most important driver for the widespread use of decision trees and rules-based decision systems.

Hi, I am the 'JAMES' in http://www.it-director.com/technology/applications/content.php?cid=9025&mode=full&hilite=13090.

I am copying my reply to your post here:

--------
Marcos, are you working in the Boston Oracle Data Mining team? I don't know you, but I happened to know your manager (do you know of Jacek, if you are really an Oracle developer?).

Marcos, I know what I am talking about. I have indeed worked on that for 2.5 years, for commercial production development. I used ODM from Oracle 10g Beta 1 to release 2 production.

Oracle Data Mining is not rubbish (sorry if it's offensive), but Oracle Data Mining is not something I would recommend to anyone. I hate it a lot personally anyway.
-----------


About me

  • Marcos M. Campos: Development Manager for Oracle Data Mining Technologies. Previously Senior Scientist with Thinking Machines. Over the years I have been working on transforming databases into easy to use analytical servers.

Disclaimer

  • Opinions expressed are entirely my own and do not reflect the position of Oracle or any other corporation. The views and opinions expressed by visitors to this blog are theirs and do not necessarily reflect mine.
  • This work is licensed under a Creative Commons license.
