The quest to understand and predict intent is as old as life itself. The fundamental challenge has always been to decipher the workings of our inaccessible, mysterious gray matter. Our brains build a constantly evolving “model” of the world around us, and that model is the source of all intent. Everything we see is an expression of it, whether it’s spoken or written language, or twenty-first-century incarnations such as browsing, posting, Liking and buying online.
It’s been the domain of the “seers” of every age to make sense of what we can observe and divine what’s behind it all. From the philosophers, Renaissance thinkers and writers to the economists, analysts and marketers, they have all had – or continue to have – their say. The latest in this lineage are the Big Data scientists. What we “see” today are exabytes (10^18 bytes) of data about what humans do. The good news is that there’s a lot to see; the equally bad news is that there’s a lot to see!
Datasets of this size are way beyond the scope of human analysis, so we call in the machines. This presents a paradox: if we need machines, and we’re seeking an understanding of human thinking – isn’t that a catch-22?
In the beginning, there was the keyword
As early as the beginning of the 1990s, various permutations of Artificial Intelligence (AI) rose to prominence, along with brethren such as Natural Language Processing (NLP). With them came corresponding challenges: the limitations of classification schemas, and issues of scale. At the same time, the massive movement online proved very real, along with very real advertising dollars – particularly in the area of search.
In response, the marketplace ushered in the era of Big Databases, which basically took a page from the same old AI playbook: if we cannot emulate the way humans think, we can at least enumerate the possible ways thought expresses itself. And so keywords became the currency of the Internet, organized into categorical hierarchies, ontologies or taxonomies.
The challenges are clear. Long, enumerated lists must be manually updated and curated; language is complicated and the same keywords can signify different intent; and every context must be covered for ambiguous words. Even thornier is the struggle to cover discrete actions and keywords that represent the same user intent. A “healthy appetite for knowledge” and “having a healthy appetite” are miles apart in intent, but too close for keyword comfort. Imagine coding all nuances into lists to enumerate all possible intent and contexts – and how challenging it is to get this right for ad targeting.
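To make the ambiguity problem concrete, here is a minimal sketch of naive keyword-list targeting. The keyword list and ad category are hypothetical, invented purely for illustration; the point is that verbatim keyword matching cannot tell the two “healthy appetite” intents apart:

```python
# Hypothetical keyword-to-ad-category list (illustrative only).
TARGETING_KEYWORDS = {"healthy appetite": "nutrition ads"}

def match_ads(text, keywords=TARGETING_KEYWORDS):
    """Return every ad category whose keyword appears verbatim in the text."""
    text = text.lower()
    return {category for kw, category in keywords.items() if kw in text}

# Two pages with very different intent...
education_page = match_ads("She has a healthy appetite for knowledge")
dieting_page = match_ads("Tips for keeping a healthy appetite while training")

# ...both trigger the same category, though only one is a sensible target.
print(education_page, dieting_page)  # {'nutrition ads'} {'nutrition ads'}
```

Patching this with ever-longer lists of exception phrases is exactly the manual curation burden described above.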
Concepts as the new currency
Fast-forward to the present. Today, at NetSeer, we view Big Data as our friend and use it to model the equivalent of an evolving, collective “brain” – our ConceptGraph. What distinguishes the human brain is its associative nature. Objects alone, such as words or phrases, do not give meaning or express intent. Meaning lies in the Gestalt, or cluster, of objects: the Concept. Concepts form the basis of our ad targeting.
Putting this into practice, let’s look at “twerking”. As a pop-culture phenomenon, twerking doesn’t reside in isolation in our memory. Instead, the Concept of twerking exists only when associated with other words and ideas, such as “Miley Cyrus”, “Grammy Awards”, “music”, “career-defining moments” – and perhaps others unfit for print. Each of these ideas, when associated with other sets of words, forms a distinct Concept of its own. Meaning lies in these natural connections, giving rise to Concepts as a currency of human intent.
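The associative idea above can be sketched as a toy graph. This is not NetSeer’s ConceptGraph, just a minimal illustration under the assumption that a Concept is a node whose meaning comes from its neighbors, so two Concepts that share many associations are related even if their labels never co-occur:

```python
# Toy concept graph: each node maps to its set of associated ideas.
CONCEPT_GRAPH = {
    "twerking": {"miley cyrus", "music", "dance", "grammy awards"},
    "miley cyrus": {"twerking", "music", "pop", "grammy awards"},
    "mortgage rates": {"banks", "interest", "housing"},
}

def relatedness(a, b, graph=CONCEPT_GRAPH):
    """Jaccard overlap of two concepts' association sets (0 = unrelated)."""
    na, nb = graph[a], graph[b]
    return len(na & nb) / len(na | nb)

print(relatedness("twerking", "miley cyrus"))    # high overlap: 1/3
print(relatedness("twerking", "mortgage rates")) # no shared associations: 0.0
```

The word “twerking” never appears on a page about the Grammys, yet the shared neighborhood still connects the two – which is precisely what a flat keyword list cannot do.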
As an organic, predictive “intent engine”, the NetSeer ConceptGraph emulates human thought patterns to create associations between related ideas and accurately detect the intent behind any content, user action, or collective lookalike pattern. That translates to greater targeting accuracy and better campaign performance.
It’s evolution, dear Watson.