
by Matt Manning

These days, advertising technology (both serving and blocking) is generating a lot of discussion. Most of it falls into two categories: “I Hate Ads” and “Publishers Are Screwed.” Neither adds anything to the very real discussions about new models for delivering content being held behind closed doors at the major content creation firms.

If I were a fly on those walls (as I once was*), here are some ad delivery models that I would explore to give the content creators the ROI they need without alienating their readership:

  • Short chunks of text inserted mid-sentence in the body of articles
  • Non-pop-up interstitial messages
  • Ad copy embedded in the image files in articles so when the articles hit social media channels, the ad message gets wider exposure
  • RFP services that deliver qualified leads to vendors
  • Brief “registration” or other required surveys before accessing premium content, asking questions of specific interest to advertisers

This last model appears pretty regularly these days, and I think it’s the closest thing to a workable consensual agreement between the two parties: if you (the publisher) let me access your content, I will give you accurate information about my purchase intentions, my view of market trends, etc. This model has the virtue of being a simple variation on the battle-proven “controlled circulation” surveys that are the bedrock of the B2B publishing industry. It’s not as much of a perceived hurdle as registration walls, it’s transparent, and it vastly improves the relevance of the ads displayed. Best of all, it can be used over and over again to ask different sets of questions, building deeper and deeper profiles of readers and the issues and challenges they face.

The technical issues of ad serving, cookies, blocking mechanisms and their ilk are, IMHO, irrelevant because they will always be changing. The ability to skip network TV ads with DVRs will lead to more product placement within the shows themselves as inevitably as Winter follows Autumn. This will no doubt cause another reaction that will lead to yet another model variation in an endless cycle of thrusts and parries.

What’s constant for content creators in a given marketplace (bathroom fixtures, politics, footwear, whatever) is their need to know and support all the players in that marketplace. This includes the buyers, the sellers, and the thought leaders working for both buyers and sellers. This can’t happen without accurate, current, in-depth information about the whole marketplace. Gathering this data is key to the entire premise of publishing. We should start thinking about this process as less “readers versus advertisers” and more as an information-oriented version of “leave a penny, take a penny,” where valuable intention data is freely given in exchange for access to insight from the aggregated results of the surveys of tens of thousands of other players in the marketplace. This may not qualify as “community,” but it is a workable symbiosis that benefits readers, advertisers, and the innovative publishers bringing them together.

* In 1995 I launched hoovers.com with a business model that included a robust free tier of information to attract potential subscribers to our paid tier. I incorporated display advertising in the free tier, even though at the time there wasn’t any significant advertising on the Web. Less than a year later, though, the first animated GIF arrived. Soon afterwards, advertising became a substantial, free-flowing revenue stream that eclipsed our service’s subscription revenues for the next couple of years and ushered in the era of ad-supported content on the Internet. Like so many information services since that time, Hoover’s changed its model, and advertising is now an insignificant part of that new model.


posted by Shyamali Ghosh on October 31, 2015

by Matt Manning

A few years ago, I spoke of a promised land of interlocking APIs aggregating disparate yet authoritative sources of information, so that information services could provide up-to-the-minute data accuracy. If a corporate office were to move or an executive be promoted, those changes could be reflected far and wide, accurately. I now believe that future is five years or less from becoming reality.

The recent and long overdue move by the IRS to make Form 990 data on millions of U.S. non-profit organizations open and publicly available by 2016 may be the tipping point. It will let thousands of firms, both information services and potentially every CRM licensee, keep their databases up-to-date in near real time using direct integration of multiple APIs or, more likely, with the help of API aggregators.

This dataset may prove to be the straw that breaks the camel’s back. The decision on 990 data sets the stage for the inevitable release of core IRS data on all U.S. taxpaying entities. When that massive dataset becomes accessible via an IRS-based API, it will allow dozens of critical datapoints to be updated instantly using only an entity’s employer identification number (EIN). Changes to a company’s legal name, executive staff, age and gender of executives, addresses, phone numbers, etc. can all be updated at nearly the moment of the change.
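As a rough sketch of what EIN-keyed updating could look like, here is a minimal merge routine in Python. Everything here is an assumption for illustration: the payload shape, the field names, and the idea of a local record enriched with proprietary data; no real IRS API exists yet for this.

```python
def sync_record(local: dict, authoritative: dict) -> dict:
    """Overlay authoritative registry fields onto a local CRM record,
    leaving local-only enrichment fields untouched."""
    # Hypothetical set of fields an EIN-keyed registry would govern.
    SYNCED_FIELDS = {"legal_name", "address", "phone", "executives"}
    updated = dict(local)
    for field in SYNCED_FIELDS:
        if field in authoritative and authoritative[field] != local.get(field):
            updated[field] = authoritative[field]
    return updated

# Local CRM record, enriched with a proprietary "intention" signal.
local = {
    "ein": "12-3456789",
    "legal_name": "Acme Corp.",
    "address": "100 Main St, Austin, TX",
    "intention_score": 0.87,  # proprietary field, never overwritten
}

# Payload as it might arrive from a hypothetical EIN-keyed registry API.
payload = {
    "ein": "12-3456789",
    "legal_name": "Acme Corporation",  # legal name changed at the source
    "address": "100 Main St, Austin, TX",
}

fresh = sync_record(local, payload)
print(fresh["legal_name"])  # prints "Acme Corporation"
```

The point of the design is the last paragraph of the post in miniature: the registry keeps the commodity fields current, while the service’s own value-added fields (here, `intention_score`) ride along untouched.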

I believe that in time, companies will widely use their own managed APIs. This will ensure that standardized, current, accurate company information is readily available to satisfy the demands of customers, partners, investors, and federal regulators. The federal government’s own internal need to confirm receipt of various official notifications is already accelerating this change.

In this world, custom B2B information services can avoid the great expense of maintaining huge, unwieldy databases. Instead, they can focus on building richer, unique data based on in-depth surveys, inferential algorithms, and other indicators of “intention.”


posted by Shyamali Ghosh on October 7, 2015