Valuation of Intellectual Property (Get money for your brains)


You can Measure the Intangible

Many companies grow and thrive thanks to something that sets them apart: a product, an idea, a way of doing things. Often this is a concept, a method or a set of curricula. These intangibles can be hard to translate into an actual financial value. Many startups (and larger companies) hence sit on a hidden treasure and carry their innovation at no value, making it more complex to build a commercial, financial or VC agreement.

This should not be the case: there is a comprehensive field of knowledge that can help companies assess and recognize the value of their Intellectual Property and, more broadly, their Intangible Assets.

The discipline of valuation is populated with many experts who can help determine a substantiated valuation of intangible assets or intellectual property. Moreover, the value of these intangible assets can be recorded in the audited financial statements, amortized and treated like any illiquid asset.

This paper aims to show entrepreneurs and established companies that their IP's value can be measured, should be protected and, finally, should be recognized in their books.

The Key Methods of Valuation and Characteristics of IP

Market valuations have in common that they do not really take into account the opinion of the creators. Of course, everybody's baby is the prettiest, but this is business: the true value will always be the one someone is ready to pay for the asset. A common misunderstanding is to calculate the value of Intellectual Property based on the cost of its creation. Although cost might have some bearing, this approach is dead wrong: if you spend thousands of hours crafting the perfect fishing fly and nobody buys it, its value is zero, even if its cost is sky high. So, let's figure out how to put a price tag on Intellectual Property.

A common problem is that you want to assess the value of an IP asset without actually putting it up for sale. The valuation methods then include comparative analysis against relevant commercial transactions (Market-Based Valuation), and / or the calculation (internal / accounting) of the income generated and to be generated until the end of the Useful Life (Remaining Life Value).

Key characteristics of an (intangible) asset must be well understood to master the valuation process, starting with the Useful Life of the asset and, especially, its subset, the Remaining Useful Life (RUL). Some assets retain their full value indefinitely (e.g.: royalties), while others have a life-span limited by other factors (competition, technology obsolescence, etc.).

Another (and related) aspect is the accounting treatment of the asset. The natural obsolescence of an asset implies that, upon the end of its useful life, the asset will reach a net value of zero; the Decay is the loss of value over time, until that final null value. Whether the decay is linear or follows a curve (e.g.: exponential decay, starting slow and gaining speed over time) will have a major impact on the valuation calculations.

A rapid dive into these characteristics will help us define more accurately the value of an Intangible Asset. By the end of this piece, you might have a better sense of how much money your brainchild is worth.

The Useful Life Estimation

The Useful Life of an asset is the time during which the asset is creating / producing value, from beginning to end. It can be broken down into the following categories:

Legal: The term of the IP / asset's legal registration, as defined by the law (e.g.: patents, trademarks). The terms of the law define the Useful Life without ambiguity, unless challenged in court.

Judicial: The life and associated costs as established by a judge (e.g.: payment of royalties), with or without specific value and time limitation boundaries.

Contractual: The aggregate terms of all existing agreements between the "owner" of the asset's rights and the businesses using the asset. This can vary over time as new contractual agreements are signed.

Technological: The effective life of the technology supporting the asset, including its operating environment. An application developed in COBOL and requiring a mainframe computer to run would likely have reached the end of its useful life by now. This value varies mostly with technology advances.

Commercial / Economic: The period during which the asset / property generates revenue or creates positive income. This value can vary with market dynamics and competitive pressure. It can be complex to determine whether an asset has reached its end of life, or whether the associated revenue generation / sales have merely become less efficient.

Analytical: Historical and contextual comparative analysis of the total useful life and parameters of similar assets. If the asset is a completely novel concept (Blue Ocean), the comparison must rely on the life cycles of comparable assets, if any. In the absence of historical or contextual comparison, this approach does not work.

The table above reflects the Total Life of the asset. In some cases, the valuation takes place after the beginning of the asset's useful life; in such cases, the useful life to consider is the total life minus the time already elapsed during which the asset was creating value (the Incurred Life of the asset). For instance, a product carrying a 10-year Useful Life will, three years after its launch, retain 7 years of RUL (Remaining Useful Life). The recognized formula is: RUL = Total Life – Incurred Life.
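
As a minimal sketch of this formula (the function and variable names are illustrative, not from any valuation standard):

```python
def remaining_useful_life(total_life_years: float, incurred_life_years: float) -> float:
    """RUL = Total Life - Incurred Life, floored at zero."""
    return max(total_life_years - incurred_life_years, 0)

# The example from the text: a 10-year asset, three years after launch.
print(remaining_useful_life(10, 3))  # 7
```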

When an economic approach is used, the RUL should factor in the value of the obsolescence. In this context, the term (duration) of the projected period of income actually defines the RUL.

Specific classes of assets (and groups of assets within the same class) might have different longevity and decay patterns. For instance, a methodology or process could evolve to adapt to changes in the technological environment, while another could be replaced outright by a new approach. Typically, assets which are tightly coupled to, or dependent on, rapidly changing fields (e.g.: software, technology, business intelligence) are the most vulnerable to rapid obsolescence; hence their RUL is going to be a lot shorter. On the other hand, the use of a name or capability can generate royalties which last as long as the name remains, with little or no decay. This can have deep implications for how an intangible asset is "packaged" for a commercial transaction to maximize its value.

The RUL can be used in the Market Approach to assess how comparable potential candidate transactions are as guidelines, and to guide the adjustments that make those guideline transactions more effective as comparisons.

As a side note, although IP can have an effectively unlimited legal life, its economic life is usually shorter. The asset might remain, but its value could be gone already.

The Cost / Accounting Based Valuation: Calculating the Decay

The rate of decay over time is an important factor in an economic analysis: assuming a targeted obsolescence date (e.g.: 100% obsolescence of a training curriculum over 10 years), a projection at 6 years would give 6/10, or 60%, obsolescence, assuming a linear decay.
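
The two decay profiles mentioned earlier can be sketched as follows; the exponential shape parameter k is a hypothetical choice for illustration, not a prescribed value:

```python
def linear_obsolescence(age_years: float, total_life_years: float) -> float:
    """Obsolescence as the ratio of age to total expected life (linear decay)."""
    return min(age_years / total_life_years, 1.0)

def exponential_obsolescence(age_years: float, total_life_years: float, k: float = 3.0) -> float:
    """An illustrative 'starting slow, gaining speed' profile; k is a hypothetical shape parameter."""
    return min((age_years / total_life_years) ** k, 1.0)

# The curriculum example from the text: 6 years into a 10-year life.
print(linear_obsolescence(6, 10))       # 0.6 -> 60% obsolete
print(exponential_obsolescence(6, 10))  # ~0.22 -> slower early loss of value
```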

Obsolescence can be estimated as the ratio of age to total expected life. Detailed analysis might be necessary for a business, especially when the actual asset is separate from the income generation; for instance, a testing methodology which supports an Application Maintenance service offering contributes to the income generation, but not exclusively. The recognized contribution of the asset to the income stream is the value that needs to be carried in the assessment. A complexity arises when one of the entangled assets (the overall service offering) has a Useful Life separate from the other asset's: the testing approach might remain after the offering has been retired or replaced. At the same time, the same testing IP could be used in multiple, separate offerings, increasing its useful value over time.

The individual RUL and decay ratio of each asset need to be explicitly called out, which can lead to somewhat complex valuation efforts.

Known Transactions for Analytical (Market) Valuation

A key question when preparing the valuation of a business, a class or a group of assets is whether comparable commercial transactions (transfers of price or value) exist that can be used as a guideline.

Past transactions such as an M&A, a licensing agreement or assigned royalties are examples of transfers of value against a monetary equivalent. If the transfer relates to a class of assets or a business which is fairly comparable to the IP being valued, it can help anchor the valuation of the price.

Multiple transactions are even better, as it is unlikely that a single transaction would be a perfect match, and each transaction can be compared to the subject of the study to determine a market-based calibration. We saw earlier that factors such as RUL and Obsolescence (Decay) must be thoroughly analyzed to calibrate the comparison, especially in the case of entangled Intellectual Property assets.

The actual price of each transaction, but also the estimated remaining life of the asset, are key elements for the valuation methods illustrated below.

The remaining useful life (RUL) of the asset can be used to determine:

  1. The duration of the period of income / revenue generated by the asset. If the revenue will never cease to be generated, then the useful life of the asset is a perpetuity. (The useful life of an asset can be shorter than its legal equivalent, as IP can have an unlimited legal life.)
  2. The date at which the residual value of the asset will reach zero, which anchors the calculation of the "decay", or the depreciation of the value over time.
  3. Internal or external factors that could alter the useful life or the income generated by the asset. A great concept, for instance, could create a large market value (e.g.: the Apple iPhone), but as soon as competitors enter the market, the net value takes a plunge, as the organic market for the product just got smaller. Both Value and Useful Life would be impacted in this case, as the life of the asset would become shorter, and the revenue would be shared with competitors (e.g.: Samsung).

So the Comparative Valuation of an Intellectual Property "object" is primarily the substantiated analysis of the price paid in comparable instances for similar IP, correlated with the impact of uniqueness, desirability, stability of the income and competitive pressure (commercial or technological).

In some cases, examples of recognition of a similar or comparable IP can be captured in the financial statements and declarations of other companies or organizations. Although such figures might need to be calibrated in the same way a comparable sale or M&A would be adjusted, this information can greatly reduce the effort of establishing a comparison guideline.

The Income / Revenue Based Valuation

The number of periods during which a projected income will occur (the Intellectual Property RUL) practically defines the Asset Value. A longer RUL results in a higher value, because there is more economic income to be capitalized. The same potential annual revenue generated by an IP asset over a 5-year Useful Life will yield half of the value generated by a similar income stream over a 10-year life.

If the asset is used to create a new market segment, the growth of the segment over time will produce an annual income growing every year in absolute value. This becomes more complex, as the valuation will rely on estimates of market growth, which can be less "tangible" than projected current revenue for a mature product. Predictably, in such a case the valuation of the Intangible Asset will need to be revised periodically, as the economic context evolves.

Because the expected future economic income is discounted to the present, the total value of the asset is very sensitive to changes in RUL, especially for short periods. An innovative asset whose valuation factored in market growth, and which suddenly faces a halving of its useful life due to competition or technology changes, would likely see its value reduced by more than half.

The rate of decay of an individual intellectual property asset in an income-based valuation is the expected reduction, over time, of the income generated by the use of the asset. It is not unexpected that by the time the asset is replaced or retired (the end of the Useful Life), some income (although reduced) is still being generated. In such a case, the decay does not reach a null value but a reduced, residual one.
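
A minimal sketch of such an income-based valuation, assuming a linearly decaying income stream discounted to the present; the income figure, discount rate and residual fraction are all hypothetical:

```python
def income_based_value(annual_income: float, rul_years: int,
                       discount_rate: float, residual_fraction: float = 0.2) -> float:
    """Present value of an income stream that decays linearly from 100%
    of annual_income down to residual_fraction over the RUL."""
    value = 0.0
    for year in range(1, rul_years + 1):
        decay = 1.0 - (1.0 - residual_fraction) * (year - 1) / max(rul_years - 1, 1)
        value += annual_income * decay / (1.0 + discount_rate) ** year
    return value

# Hypothetical asset: $100k starting annual income, 10% discount rate.
print(round(income_based_value(100_000, 10, 0.10)))  # ~411k over a 10-year RUL
print(round(income_based_value(100_000, 5, 0.10)))   # ~242k over a 5-year RUL
```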

So, How Much is My IP Worth?

Various methods and parameters of Intangible Asset valuation are described in the sections above. In practical terms, these are usually used simultaneously and complement each other. Unless the substantiation of the valuation is unambiguous and relies on a large number of absolutely comparable transactions or disclosures, some level of ambiguity always remains in the assessment. The potential bias carried by one method can be greatly reduced or eliminated by using at least two completely independent valuation methods, relying on separate criteria and parameters.

The value to be estimated will always be the Residual Life Value (RLV): the value to an analyst or a potential "buyer" is the value to be generated during the Remaining Useful Life, with no practical regard for when the Intellectual Property asset was released or launched.

Because the RUL is likely to span more than a fiscal year, it is advisable to calculate the Present Value of the RLV, especially if the value is compared to similar assets carrying various life expectancies and remaining lives.

So how do we get started?

A first step is to describe and qualify a candidate Intellectual Property Asset. The asset must be clearly identifiable, without ambiguity, and remain stable in its nature and characteristics during the period of reference. Updating a methodology or upgrading the operating system of a computer does not really change its characteristics (although it might expand its Useful Life).

Then the asset needs to be linked, directly or indirectly (via a contribution ratio), to a revenue stream, which will carry the value. The evaluation of the Useful Life, and in particular the Remaining Useful Life, of the asset will determine both the duration for all calculations and the range of revenue recognition for income streams. Analysis of the factors that could reduce or alter the RUL and / or the generated income is critical to determining a credible Useful Life.

The Decay or Obsolescence of the asset's value over time, and its decay profile (linear, exponential, YoY ratio), will determine the value for each period of income, and need to be thoroughly analyzed.

Finally, additional external factors (competition, market, technology) and internal factors (evolution, change of direction, strategic shift) can impact the value and decay, in a positive or negative way.

Outsourcing the valuation analysis to a registered professional is important, especially if a potential commercial transaction, partnership, negotiation with a bank, etc. is on the horizon. Self-evaluation remains a good practice nonetheless, especially for the "light bulb" factor, when some of the implications of the valuation dawn on the owners of the IP.

Take your IP to the Bank

Now that you have performed a dual analysis and settled on a number for your IP in Present Value, the million-dollar question is (as a matter of fact): how do you make it appear in the Financial Statements, especially those reviewed by investors, banks and potential business partners?

A simple answer is: just like that. Create an entry in your Assets List (either under Other Assets or Intangible Assets) that records each Intangible Asset separately, with its final valuation number. The Balance Sheet is a good place to start, where Intangible Assets can be reported under non-current assets (current assets are cash, Accounts Receivable or liquid assets that can be converted into cash within a year or so). The depreciation can then be calculated based on the Decay and the evaluation of the economic cost / amortization over the Useful Life, as seen earlier. This is where the accounting treatment of the asset matters, as choices of amortization, depreciation and other factors can impact how you report the asset's value. This level of detail is beyond the scope of this paper, but your CFO or accountant should be able to help you navigate the options. You might want to watch the year-on-year changes to the valuation if you decide to do periodic revisions; these are now declared assets in your books, and any change will need to be recorded in proper fashion, so it can be audited and verified.
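
As an illustration of the simplest treatment, here is a sketch of a straight-line amortization schedule for a linear decay profile; the asset value and useful life are hypothetical, and your CFO may well choose a different treatment:

```python
def straight_line_amortization(book_value: float, useful_life_years: int):
    """Yearly amortization expense and carrying value for a linear decay profile."""
    expense = book_value / useful_life_years
    carrying = book_value
    schedule = []
    for year in range(1, useful_life_years + 1):
        carrying -= expense
        schedule.append((year, round(expense, 2), round(carrying, 2)))
    return schedule

# Hypothetical $500k intangible asset amortized over a 10-year useful life.
for year, expense, carrying in straight_line_amortization(500_000, 10):
    print(f"Year {year}: expense {expense}, carrying value {carrying}")
```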

Changes to the disposition of the asset, such as a trade agreement, licensing, royalties, etc., that would impact (positively or negatively) the valuation or Useful Life of the asset should be treated like any other transaction impacting your books.

Now you have a firm valuation number for your IP that withstands scrutiny, is reflected in your Balance Sheet as an actual Asset and adheres to GAAP. By now, you have probably dropped the idea that the valuation of your Intellectual Property has anything to do with the hard work and passion you put into it.

Yes, this is just business… But this is business that can get you into a more constructive conversation with a potential partner or investor!

Invest in Developing or Protecting your IP

So you got yourself a dollar figure for your brainchild. How do you protect and support its sustained value? Two dimensions of the valuation can be contemplated, which are (no surprise) the same ones we reviewed earlier: the Remaining Useful Life and the economic value of the asset.

Say your Intangible Asset has a finite useful life of 10 years, as for a new technology widget or software construct. You calculate the value from the income stream over the 10 years of useful life and record this value in your books. But in the 3rd year of the life-cycle, you decide to extend the useful (revenue-generating) life of the product by investing in upgrades and refinements. With this upgrade deployed, the Remaining Useful Life (7 years at this point) has now been extended by another three years. Voilà. Repeat the process periodically and you might extend the life of your product for a long time, provided that the changes actually extend its commercial life, that is.
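
A sketch of the revaluation implied by such an extension, assuming a flat annual income stream discounted to the present; all figures are hypothetical:

```python
def revalue_after_extension(annual_income: float, rul_years: int,
                            extension_years: int, discount_rate: float):
    """Present value of a flat income stream before and after a life extension."""
    def pv(n: int) -> float:
        return sum(annual_income / (1 + discount_rate) ** y for y in range(1, n + 1))
    return pv(rul_years), pv(rul_years + extension_years)

# The example from the text: 7 years of RUL left, extended by 3 more,
# with a hypothetical $100k/year income at a 10% discount rate.
before, after = revalue_after_extension(100_000, 7, 3, 0.10)
print(round(before), round(after))  # ~487k -> ~614k
```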

If your product is growing its market share, it is likely that the income generated during the three years you added will be higher than the revenue accrued during the first years. But it could be the other way around, with the life extension supporting an aging product and a slowly shrinking income stream. Either way, the added value will be the income generated in each year of the extended period of reporting. Remember, however, that hopes and dreams are not facts; even if you are wishing very hard that your revenue will jump up, economic and financial rules only look at facts and hard data. If your revenue stream is shrinking, then the valuation will follow. Make it rebound and record new growth, and you can revisit your valuation.

So you have a vested interest in not only recognizing the value of your intangible assets, but also in taking the steps to expand and support this value over time.

The flip side of this coin is that the declaration of the value in the books also states the ownership of the asset. Should the business be acquired or sold, the asset will likely go with it, along with the associated property rights, unless a special clause or agreement states another disposition.

Intangible Assets Valuation is Good For You

The process of assessing and recognizing the value of Intangible Assets and Intellectual Property is not huge, but it is quite involved, probably including the use of a professional expert. There are financial and accounting implications, as well as property rights and other collateral aspects, which can rapidly become complex. At its core, however, is a relatively simple and effective process that will put a price on the asset, substantiated with facts and figures.

A practical use of an IP valuation is to help with a planned acquisition or merger. In addition to the net economic and future economic value of a business or organization, the valuation of a company's IP can be immensely helpful in choosing between candidates. Many M&As are driven by some level of IP acquisition in addition to the fixed assets. Being able to perform at least an elementary valuation could make the difference between a good and a bad deal.

If the targeted transaction is only an investment, the valuation of intangible assets can be precious, especially for startups or emerging businesses. It can also be used to determine whether the price is right, and whether the candidate for investment actually manages its intangible assets effectively, as these are in many cases the source of future growth or revenue streams.

The most compelling case is probably startup companies, often struggling to convince investors or banks of their value. While an IP valuation does not replace a solid business plan, it can complement it, offer a more compelling view of the business savviness of the entrepreneurs, and provide an illiquid asset that can boost the initial capitalization of the startup.

In effect, many struggling entrepreneurs might just be sitting on a pot of gold, unknowingly.

Your brains have a tagged value: make good use of it!

Rethinking the Information Security Ecosystem


Security threats have changed in nature and frequency

Information-related attacks perpetrated against companies grow more sophisticated and less predictable every day. Mere hours separate the identification of a new exploit from the first full-scale hacks, and in some cases the exploit is the direct result of advanced, systematic research performed by professional teams worldwide.

As the skills and expertise of professional hackers grow, they are increasingly supported, or outright owned, by organized crime and nation-level organizations.

The funding of such networks of cyber-criminals at such scale outpaces the capacity of any company, and of most governments, to provide adequate protection and deterrence. With relative impunity, organized groups can select a target and hit it relentlessly until they break in; unfortunately, there is no actual way to stop them short of taking the targeted assets offline entirely, which in many cases is practically impossible.

Those groups can afford to design and create extremely complex automatons (bots) which can outsmart any existing defense through heuristics and extensive networks.

The late 2013 security breach at Target aimed at capturing credit card information and codes; unrelated January 2014 arrests at the Mexican border turned up a batch of credit card information that had already been resold and was about to be used. The latency between an exploit and its arrival in criminal organizations has vanished, as cyber-crime increasingly sponsors the creation and use of the exploits themselves. An actual open market for stolen cyber-goods has become normalized, operating on a model not too far from eBay and other open mercantile exchanges.

Traditional response strategies are no longer adapted to fast-moving cyber-crime, and national watchdogs are nowhere close to matching the means and skills of the newest generation of cyber-criminals. Many security practices are basically the same monolithic ones created at a time when data centers were impregnable fortresses surrounded by layers of fences and electric wires.

Then came smart hackers, and the realization that most of the vulnerabilities come from the inside. Breaking through the weakest individual link (employee or contractor) or a supplier / client network connection provides unhindered access to a vast array of privileged information without much effort. There is not much a company can do to stop this, even with world-class security. Why bother crawling on your knees and elbows to sneak into the place, when all it takes is someone to hold the door for you, willingly or unwillingly? It is then too late to stop the invader, already in the inner sanctum, who has probably taken all the precious information away before anyone notices… if anyone ever does.

In this new world where cyber-crime is the new normal, a complete economy is being created that operates on traditional macro-economic rules with the sole purpose of dealing in illegitimate data acquisition, sale and exploitation. This is a global economy, involving all regions of the world, where market dynamics are based on the value of the data for sale; in this world, the boundaries between crime, espionage and terrorism are blurred, and the anonymity of buyer and seller is assured through the use of dynamic IPs and other tools enabling cyber-criminals to operate absolutely out of reach and out of sight.

We need to engage in a complete rethinking of our cyber security, not only at the defense and response strategy level, but also, and more fundamentally, in how we store and use information today.


We all are targets. Nothing personal.

The Department of Defense and Wall Street's Big Bad Wolves are no longer the primary targets of attacks, as ethical activists are being crowded out by espionage, petty crime and terrorism operators.

All of us are potential targets: for small unauthorized cash transactions, identity theft, the preparation of Denial of Service attacks, or even as training grounds for cyber-criminals in training. Even a small amount of loot is meaningful when repeated a few hundred times at once. We have entered an age where someone, eventually, will do anything for money; organized crime and rogue governments know it and are exploiting it. We had better get prepared for it.

The explosive growth of social media and its multiplication of private information into the public domain provide a continuous flow of entry points and vulnerabilities, compounded by a mostly obsolete mindset regarding passwords and protective layers. Marketing dynamics and social media marketing are all creating lists of targets, laden with personal details and preferences.

Past exploits were a combination of politically or socially motivated groups such as Anonymous (hacktivism), of nationally supported espionage to capture industrial or military secrets, and of cyber-criminals trying to find a trove of information to take advantage of, such as credit card numbers. Over time, cyber-criminality has become so effective that the threshold of what makes a viable target has shrunk from a global bank or corporation all the way down to the soccer mom with a single bank account. The use of tools and "bots" to acquire the raw data has lowered the cost of this initial capture, which in turn changes the economics of the hack: what would have been a low-key capture becomes a valuable target simply because it is part of a broader capture. Where a single account would provide little value by itself, the association of hundreds of them simultaneously increases the return on the effort at a marginal incremental cost. This makes you and me valuable targets for organized cyber-crime, when in most cases we were negligible to the individual hacker.

A disturbing thought is the blurring of the boundaries between terrorist and criminal organizations, fueled by their common, never-ending, compulsive need for more revenue. Just as networks harvesting and distributing drugs are becoming less and less distinguishable from each other, when they are not outright collaborating, cyber-channels are taking the same path, making the definition of valuable data a lot more complex. Your bank account is no longer the only targeted information; your work-related accesses, the company you keep, indirect information on companies and infrastructure can be equally valuable, as long as someone is ready to pay the price. Nothing personal, it is business.

With the commoditization and globalization of technology networks, skills and data flows, a new economy is being born, operating under the same rules as legitimate business economies, with the caveat that its tenets are those of a criminal enterprise. Networks and logistics, national and international marketplaces, safe havens, schools and experts, funding and investments: all the classic components of a thriving trade activity are present in the illegitimate data market. Although it operates under the radar for most of us, this self-regulated economy has been growing rapidly, to the point where attacks and criminal endeavors can be "profiled" by experts. This underworld is actually structured with local actors, international networks, resellers and buyers.

In this new world, all companies, all organizations and all of us are potential targets. Where hacktivists would spend time and effort to penetrate the mailbox of the executive team of a targeted company, cyber-criminals are more likely to go after a thousand softer targets, which together yield a higher return. Teenagers being teenagers, there will always be a posse breaking into the Principal's mailbox for pranks or for test answers; but this is not really worse than climbing through the window a few years ago to get the same result. Hacking into John Smith's account to get banking routing numbers and sell them to a gang operating across the border is driven by nothing but elementary supply-and-demand market dynamics, where volume and usability are becoming the core criteria.

Shifting the security focus

The responses to these new threats need to come in near-real time to be meaningful, and must make a breach sufficiently difficult that the effort required creates natural deterrence. One positive thing about cyber-crime's reliance on automated tools is that sufficient complexity in the fencing security (like a very strong password) will exhaust most tools' scripts before they break in, creating de facto deterrence. There is always the case of the hacker-in-training rising to the challenge, but professional crime is mostly driven by profit, and a common business practice is to drop the cases you cannot win easily.
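
The economics of this deterrence can be sketched with a back-of-the-envelope keyspace calculation; the attacker's guess rate is a hypothetical assumption:

```python
def years_to_exhaust(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time for an automated tool to enumerate the full password keyspace."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

# Hypothetical attacker trying 10 billion guesses per second (offline attack):
print(f"{years_to_exhaust(26, 8, 1e10):.6f} years")   # 8 lowercase letters: seconds
print(f"{years_to_exhaust(95, 14, 1e10):.2e} years")  # 14 printable chars: astronomically long
```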

Meanwhile, the old core security strategies of doing periodic assessments and monitoring randomly for threats are no longer sufficient to protect a company's data. Some of them even contain their own Trojan horse, as the focus remains on creating a very-difficult-to-breach first barrier behind which everybody feels (falsely) safe; when this barrier fails, no protection whatsoever remains, leaving the critical data fully exposed.

Internal security experts (the good guys) face the unique challenge that new generations of cyber-criminals keep coming up, each outpacing the defenders a little more in skills and capabilities. Training and the constant watch of emerging threats and exploits are the new battlefield for cyber-security experts, so they can at least keep up with the fast evolution of attack strategies. This is a losing battle, however, as long as the experts protecting companies and our society are stuck in a reactive mode. The emergence of a new class of threats, continuously evolving to defeat the barriers being raised, requires a whole new thinking on security strategies, as deterrence becomes an economic barrier and no longer a security frontier.

So here is the news: it is no longer credible to assume that you can find a set of security barriers and responses that will stop all attacks. Over time, your best defense will be defeated, given enough attention and desire from the bad guys. Let's absorb the implications of this statement for a moment.

The old concept (still very much the only one at play in many companies and organizations) stood on the foundation of a strong, active set of barriers and alarms protecting the core information stronghold. While initial deterrence remains an important part of the security apparatus, it carries a flip side: all key information and resources are completely unprotected should someone breach those barriers. Until a hostile penetration occurs, nobody worries about being exposed, feeling comfortable behind the thick walls of corporate security. Suddenly there is a breach, and they are looking into the eye of a hostile bot with utter panic, the best remaining option being to take the entire organization offline, if this is even still possible, and provided that the threat is not discovered days or weeks later.

The focus of an effective security strategy can no longer be limited to a series of electronic barriers enclosing a treasure trove of data for the intruder; that leaves critical data fully exposed to hostiles breaking through. CISOs and other security architects need to cover all fronts simultaneously, assuming that each of them can be breached. A perfect illustration of the new thinking is when an organization digests the fact that most breaches originate with insider knowledge, shared deliberately or unwillingly with someone who then takes action. It is practically impossible to prevent some employees from surfing unsavory internet sites, responding to phishing campaigns, storing key passwords on their smartphones, and committing so many other elementary violations of security protocols. It is sometimes puzzling how people otherwise mature and savvy can be caught red-handed protecting critical information with John123 or their birthdate as the password.

Add to this pure data leakage, such as customers' credit card information left on a non-encrypted laptop forgotten on the back seat, or deliberate industrial espionage tools built to capture encryption keys, and there are too many weak spots to effectively protect all the data all the time. It is therefore a new reality that we are facing, where strategies and tools need to be built to separate data, or architect it in ways such that access to one data store would only give a partial, ideally worthless, view of the core information.

Let's assume that someone will be able to access your core customer information, sooner or later. The question is how to make sure that this access does not yield enough aggregated data to become marketable merchandise for electronic pirates and their customers. Separating logical strings, such as the contract terms and conditions from the name and mail address, or splitting the customer base into subsets without universal access, illustrates what a "disruptive" data architecture could be. Using hard encryption on key data sets, without which the rest of the data loses much of its value, raises the difficulty (and hence the effort and cost) for cyber-criminals. In effect, the goal here is not to prevent access (which is another, simultaneous strategy) but to make it complex, and hence expensive, to re-aggregate the data into trade-worthy loot, rendering the acquisition not profitable enough for the potential hackers.
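
A minimal sketch of what such a "disruptive" separation could look like, splitting one customer record across two stores linked only by an opaque token; the store layout and field names are purely illustrative:

```python
import secrets

# Two physically / logically separate stores; a hypothetical layout for illustration.
identity_store = {}   # who the customer is
contract_store = {}   # terms, balances, etc.

def store_customer(name: str, email: str, contract_terms: str) -> str:
    """Split one record across stores, linked only by a random opaque token."""
    token = secrets.token_hex(16)
    identity_store[token] = {"name": name, "email": email}
    contract_store[token] = {"terms": contract_terms}
    return token

token = store_customer("Jane Doe", "jane@example.com", "Premium plan, $99/mo")
# A breach of contract_store alone yields terms with no identities attached;
# re-aggregation requires compromising both stores plus the token linkage.
print(contract_store[token])
```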

What is missing is a multi-pronged strategy including a dynamic response to ongoing threats, and a whole new philosophy on critical information and its storage.

A dynamic response is basically a set of tools, above and beyond the usual cyber-security barriers, continuously checking and analyzing the flow of activities in the entire network to flag singular events and abnormal behaviors as soon as they occur. This activity should be automated as much as possible, and smart tools should be leveraged to start matching the cunning talent of cyber-criminals with comparable smarts. Besides an alert system flagging transactions, flows and patterns as soon as they arise, human oversight should continue analyzing and observing traffic patterns to continuously evolve the knowledge embedded in the early-warning system. By creating networks of guardians sharing information and tips as they emerge, the collective response would grow significantly, along with a direct reduction of the potential impact of any new penetration routine. National cyber-security watchdogs would also be able to provide inputs, as well as benefit from the broad network of individual custodians of corporate security (national and federal organizations being chronically under-funded and ill-prepared to take on such broad coverage).
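
A toy sketch of such a flagging mechanism, marking a value as abnormal when it strays too far from a rolling baseline; the window size and threshold are arbitrary assumptions, not a recommended configuration:

```python
from collections import deque
from statistics import mean, stdev

def make_flagger(window: int = 50, sigma: float = 3.0):
    """Flag a value as abnormal when it deviates more than `sigma`
    standard deviations from a rolling baseline (illustrative threshold)."""
    history = deque(maxlen=window)
    def flag(value: float) -> bool:
        abnormal = (len(history) >= 10 and stdev(history) > 0
                    and abs(value - mean(history)) > sigma * stdev(history))
        history.append(value)
        return abnormal
    return flag

# Hypothetical stream of requests-per-minute from one internal host.
check = make_flagger()
for rpm in [20, 22, 19, 21, 20, 23, 18, 22, 21, 20, 450]:
    if check(rpm):
        print(f"ALERT: abnormal activity ({rpm} rpm)")
```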

The primary benefit of an early-warning system is that you can catch a threat as it occurs, before too much damage is done. Needless to say, the earlier the catch, the less repair and cleansing will be needed; an early catch also enables shutting down the backdoor fast and capturing the details of the penetration attempt, giving a much better sense of where it came from and how it got around the first perimeter of fencing. This must remain a learning system, not a static moat.

A secondary benefit is the fine-tuning of the early-warning system itself, enabling it to catch earlier signals and outsmart more covert "agents", eventually matching closely the creative skills of the new generation of hackers.

Another, equally critical component is the complete overhaul of the data and information architecture within the organization. Today's IT organizations and architectures are primarily designed from the vantage point of Technology being the custodian of corporate data. While this situation has historical merits and rationale, it truly has not evolved much over the past decades, creating huge repositories of original data in massive technology silos. The classic alternative, having individual users create and store their own data, being an even worse scenario, nobody has effectively questioned the fitness of such an architecture in an increasingly online and virtual world. There lies the fundamental issue: the very concept of the data custodian, central to most organizations, generates fat sitting ducks for cyber-hunters on the prowl. Should one of them breach the cyber-fences, the entire set of critical and non-critical data is just there, accessible with almost nothing to stop them.

Some organizations, such as banking and healthcare, have been building layers of access filters, mostly for regulatory purposes. But most of these barriers were designed to prevent end-users from accessing privileged data, leaving technology access mostly unchecked, even if monitored. Companies need to rethink their entire data strategy with the clear goal to prevent, or at least reduce, potential access to the core information, including breaking down the most critical data into subsets that are meaningless unless all parts are present at the same time. Hard encryption of some of the data, with dynamic keys, would also build a strong deterrence for would-be hackers. Anything that makes it more complicated, that calls for more resources to access meaningful data, is a step in the right direction.

Getting started…

As mentioned before, cyber-crime has turned into a business. Unless you are facing ethically or politically motivated hackers, there is a threshold of cost and complexity which will eventually turn off most tentative petty hacks. If you are a "professional" hacker looking at a couple of hundred targets, it is very likely that you will go first and foremost after the ones that are easier to grab and break. Then you would go after the next ones, and the next ones, until you have exhausted the list or reached a point where the targets are not juicy enough to justify a continued effort, while other sitting ducks are quacking for attention all around.

This is clearly not a good enough strategy if you are the Pentagon, the CIA, General Electric or Microsoft, as the thrill of the challenge and the "trophy" can be sufficient to trigger passionate focus. But who would get obsessed with hacking the customer list of an engineering company designing guides for industrial conveyor belts?

Companies, and for that matter individuals too, should think hard about which private information is really critical, and how they could break it down into meaningless subsets, or protect the most critical parts in separate ways. The same multi-pronged strategy should be used in all cases:

  • Maintain the currency of cyber-fences to reduce the exposure to the outside world and make it more cumbersome and costly to reach the actual first data stores;
  • Revise the data and information architecture to eliminate massive data stores and create physical and logical separation between complementary data objects, including disruptive / distracting structures;
  • Establish dynamic monitoring and responses to emerging or identified threats, including actively participating and contributing to specialized user groups to keep current knowledge of threats and the responses that work best;
  • Sign up for, or create, a user group focused on cyber security, preferably one with an official endorsement of, or collaboration with, entities such as the FBI Cyber Crime task force, or subscribe to the National Cyber Awareness System to get updates on threats and responses.

An organization enabling its employees to store credit card information on a laptop or mobile device is no different from those individuals publishing on Facebook that they will be vacationing in South America for the next three weeks, leaving their house empty and unprotected. These are just typical sitting ducks. Would you leave, on a connected computer, an unencrypted folder called "Personal Data" containing scanned copies of your ID card, passport, social security card, credit cards and codes? Then why would you leave your customers' or trade secret information open for all to see?

With the irruption of Big Data, a new dimension of vulnerabilities exists that pretty much breaks some of the old walls: organizations import massive data loads from mostly unchecked sources in order to perform analysis of patterns and correlations. The capacity of such a process to generate new "exploits" for cyber-criminals is simply staggering. How many companies, however, have been pro-actively analyzing the very process of data mining and acquisition, to create new logical fences and ensure that this data does not carry some nasty content with it? How many have considered that competing companies could very well publish knowingly erroneous data, simply to mess up the data mining process of unsuspecting data sourcing teams?

Since the actual extraction and formatting of the data is the primary driver for creating value here (as the data itself is readily available to all), companies should create a whole new architecture that segregates the actual external data repositories from the extracted, "value added" data resulting from the process. Moreover, this data should only be combined dynamically with internal data to generate the final layer of mining and analysis, reducing the critical data availability to a time window as narrow as possible. Such a data architecture would disrupt cyber-criminals and make it harder to aggregate the information in a meaningful, commercially viable way.

We all have to assume that every piece of non-public data is a potential value target for criminals, and needs to be protected with a combination of fencing and disruptive data architecture and storage.

This might help jumpstart the building or upgrading of a comprehensive technology and information security apparatus for the enterprise. It will help get started, but it will not replace being alert at all times, and creating networks of security intelligence to counter the power of cyber-crime and keep up with fast-emerging new threats.

Chief Security Officers are starting to face the same issues CIOs have been dealing with over the last decade: end-users and businesses resent having to work under constraints, and it can be exhausting to always be the one saying "no". It invariably ends with the users taking the power back and forcing CIOs to scramble to catch up; mobile computing and BYOD are good examples of failed attempts to control business users through non-collaborative techniques. It could be a good idea to learn from these experiences and adopt a more engaged, positive and dynamic approach to Enterprise Security.

These strategies are the foundations for a new thinking of corporate and individual cyber-security. After years of living in fear like sitting ducks, it is time we took back the initiative and fought cyber-crime with the tools of this century.

Advanced benchmarking for strategic leaders


Numbers can tell many stories. Build yours

Benchmarks have moved, in a few decades, from a specialty niche hosting rainmakers and rare experts to a broadly used tool available online, with many providers competing for the same business. Little exists, however, in terms of standards and tools to fully leverage the value of the reports and analyses.

On the customer side, everybody now uses benchmark values one way or another, picking bits and pieces of knowledge to form an opinion. The inevitable bias creates a new version of the truth, and slick formatting can crowd out deeper analysis.

Technology benchmarks are sometimes based on a single recent quarter's data. Some collect thousands of inputs, while others process a mere couple of hundred records. Resetting benchmark data from the perspectives of time (how adequate is the data-gathering period) and relevance of the sample group (how similar are the companies or businesses being surveyed) is a necessary step to make the best use of the intelligence gathered.

Like any tool, the value of a benchmark lies in how it is used. A composite competitive or comparative benchmark index can provide targeted information to match strategic endeavors: competitive performance and how the peer group responds to market changes; a specific competitor's numbers and how they compare; how much business value a strategic initiative or corporate program achieves over time. For each application, a specific composite panel can be built and monitored.

The need for perspective… and a bit of math!

Whether the benchmark is the result of internal collection or a purchased product, the results should always be placed in a fresh, skeptical perspective, to make sure that the relevance and accuracy of the data match the needs. Two dimensions, the period of reference and the proportions (breadth and depth) of the sample base, should always be used to validate how much you can rely on the calculated numbers to make decisions. Using a sample base of at least 400 independent points can provide a 95% confidence level (with roughly a ±5% margin of error), and a period of reference at least as long as the period your decision / analysis covers is another safeguard. These will not ensure that you have the right answer, but they might help you avoid the biggest mistakes. A reduction of the sample size is not always a source of error, as long as the sample remains within an acceptable confidence level (10% seems a good threshold) and the period is not smaller than that of the target analysis.
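
This rule of thumb can be checked with the standard margin-of-error formula for a proportion, assuming the worst case p = 0.5:

```python
import math

def margin_of_error(sample_size: int, confidence_z: float = 1.96) -> float:
    """Worst-case margin of error for a proportion (p = 0.5) at the given z-score."""
    return confidence_z * math.sqrt(0.25 / sample_size)

# The rule of thumb from the text: ~400 independent points at 95% confidence.
print(f"{margin_of_error(400):.1%}")  # ~4.9% margin of error
print(f"{margin_of_error(100):.1%}")  # ~9.8% - a smaller sample widens the band
```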


Composite benchmarks that fit your needs

The comparative analysis of the performance of a business against its peers is a precious tool for knowing when and where one is doing better than others, which by itself provides both a goal to achieve and an understanding of the competitive differentiators. A limited number of performance indicators generally provide a solid perspective on overall competitive performance: revenue, net profit, turnover, COGS, EBITDA, Gross Margin, SG&A, etc. For each business, a specific set of Key Performance Indicators is meaningful, such as the Underwriting Cycle Time or Abandonment Rate in the insurance industry. Once a set of indicators is created, the central value should be the industry average for the given industry or sub-segment; this is the baseline.

Three additional plotted points can immediately increase the intelligence:

  1. The best performer(s) in the industry,
  2. The worst performer(s) in the industry, and
  3. The business doing the analysis.

A graph of such a composite benchmark can provide deep intelligence on a segment of the industry and on how it reacts to changing economic conditions or seasonal effects. A simple observation of the changes to the Quick Ratio and Current Ratio in a low-season quarter, for example, could give a direct read on the inventory burden and on how each company in the selected group is dealing with turnover.
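
A minimal sketch of such a composite panel; the company names and Quick Ratio values are made up for illustration:

```python
# Hypothetical quarterly Quick Ratio readings for a peer group.
peer_group = {"Acme": 1.4, "Globex": 0.9, "Initech": 1.1, "Umbrella": 0.7}
our_company = ("Hooli", 1.2)

baseline = sum(peer_group.values()) / len(peer_group)   # industry average
best = max(peer_group.items(), key=lambda kv: kv[1])    # best performer
worst = min(peer_group.items(), key=lambda kv: kv[1])   # worst performer

print(f"Baseline (average): {baseline:.2f}")
print(f"Best:  {best[0]} at {best[1]:.2f}")
print(f"Worst: {worst[0]} at {worst[1]:.2f}")
print(f"Us:    {our_company[0]} at {our_company[1]:.2f} "
      f"({'above' if our_company[1] > baseline else 'below'} baseline)")
```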
