Reverse Deception: Organized Cyber Threat Counter-Exploitation
Authors: Sean Bodmer, Max Kilger, Gregory Carpenter, Jade Jones
Tags: #General, #security, #Computers
The effort required to gather, process, distribute, and use intelligence must be proportionate to the degree to which hostile activity does or may interfere with the operation of protected networks.
To do so requires that defenders gather a good deal of information about those who are trying to penetrate or disable their networks: who they are, how they work, what they have to work with, how well they do their work, where they get their information, and so on. The list of questions CI is concerned with is long and detailed, as is the list for PI. When these lists are formalized for the purpose of managing intelligence gatherers, they are called essential elements of information (EEI).
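To make the idea concrete, an EEI list can be handled as structured, prioritized records so that collection effort can be assigned and tracked. The following Python sketch is purely illustrative; the fields, example questions, and priorities are assumptions, not a format prescribed by the book.

```python
from dataclasses import dataclass, field

@dataclass
class EEI:
    """One essential element of information: a question intelligence must answer."""
    question: str          # e.g., "Who is probing the VPN gateway?" (hypothetical)
    priority: int          # 1 = critical, 5 = background interest (invented scale)
    collectors: list[str] = field(default_factory=list)  # assigned gathering means
    answered: bool = False

# A hypothetical CI-side EEI list for a protected network.
ci_eeis = [
    EEI("Who is attempting to penetrate the network?", priority=1),
    EEI("What tools and infrastructure do they use?", priority=2),
    EEI("Where do they get their information about us?", priority=2),
    EEI("How skilled and persistent are they?", priority=3),
]

# Collection effort is allocated highest-priority-first.
for eei in sorted(ci_eeis, key=lambda e: e.priority):
    print(f"[P{eei.priority}] {eei.question}")
```

Formalizing the list this way also illustrates the point that follows: an adversary who captures such a list learns what the defender knows, still lacks, and is trying to protect.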
Only the most primitive of threats to networks have no explicit EEI. Whether explicit or implied, adversary EEIs are targets of intelligence interest because such lists can be analyzed to divine what adversaries know or are still looking for, and, by implication, what they are trying to protect or intend to do.
The discussion thus far has brought us to a major conundrum at the center of the subject of this book. From whom are we defending networks, and from what? On the one hand, defenders know pretty well what technical techniques are used to penetrate networks; these are constrained by the nature of the technology, which others may know as well as we do. On the other hand, we have only very general ideas about who is behind attacks, because the potential cast of characters is huge, ranging from precocious schoolboys to major foreign governments and organized crime. The ephemeral nature of networks, even protected networks, and of their content does not help focus the search.
And yet, computer networks are created and operated by human beings for human purposes. At the center of it all are human beings with all their foibles and vulnerabilities. The better those are understood on both the PI and CI sides, the more effective defenses and defensive deceptions may be.
Defenders are hungry for data. The more data they have about the nature of networks, their contents, and the humans who operate and maintain them, the better the networks can be defended. But where do potential deceivers get the needed information? The technology is a large part of the answer. Networks must be stable and predictable. To get information out of a network, an adversary must be able to use the protocols designed into the network, or he must gain the cooperation of someone who can provide, so to speak, the keys to it.
Intelligence and deception are like the chicken and egg.
As a process, intelligence requires prioritization. Gathering intelligence involves defining which elements of information are required and how the means to collect them are to be allocated. This is always done under conditions of uncertainty, because that which makes items of information attractive to collectors makes them attractive to defenders.
Deception inevitably becomes involved as collector and defender vie. That which must exist yet must remain proprietary begs to be disguised, covered up, or surrounded with distractions. So, in addition to the primary information being sought, the intelligence collector needs to gather information about the opposing intelligence. What does he know about what you want to know? How does he conceal, disguise, and distract? The hall of mirrors is an apt metaphor for the endless process of intelligence gathering, denying, and deceiving.
Yet the intelligence process must have limits. There are practical constraints on time and budget. If the process is to have any value at all, conclusions must be reached and actions taken before surprises are sprung or disaster befalls. A product needs to be produced summarizing what is known as a basis for deciding what to do—whether to attempt to deceive, compromise, or destroy a hostile network.
Clearly, transactions in the chaotic, highly technical, and valuable world of computer networks require a high degree of intelligence on the part of the people who design and execute them. But this executive or administrative intelligence is not sufficient; effective defense demands another kind of intelligence.
A manipulative intelligence is needed. This is an understanding of how the various interests that created the network relate to each other. How are those who seek unauthorized access motivated? What advantage do they seek by their access? What means are available to them to determine their technical approaches or their persistence? How could this knowledge be deployed to manipulate the behavior of those seeking harmful access to our protected networks?
The main and obvious means of defending databases and networks are analogs to physical means: walls, guarded gates, passwords, and trusted people. These are as effective in the cyber world as in the physical one—which is to say, rather effective against people inclined to respect them. The old saying that “locks only keep honest people honest” applies here.
Unfortunately, the laws of supply and demand also apply. The more defense-worthy the good, the more effort a competent thief is likely to mobilize to acquire it. A highly motivated thief has been able to penetrate very high walls, get through multiple guarded gates, acquire passwords, and suborn even highly trusted people.
Passive protections do not suffice; active measures are required. Whatever form those measures take, intelligence is required, and deception requires intelligence in both senses: the product and the trait.
What Constraints Apply
The first constraint is the nature of the computer and the network. To work at all, they must have clear and consistent rules. To work across space and time, those rules must be stable, widely known, and reliably implemented.
At the same time, networks have many interconnections and many rules, and it is hard to know which permutations of those rules might allow access to unauthorized persons. More to the point, networks involve many people who use the content in their work, as well as network administrative and support personnel. All of these people, whether accidentally or deliberately, are potential sources of substantive and technical leaks.
If the unauthorized seeker of protected information is necessarily also a deceiver, and if efforts to defend that information necessarily involve deception, an ethical question moves to the center of our concern. How can we actively deceive unknown adversaries in cyberspace without damaging innocent third parties? Deceptive information offered to bait an intruder might be found and used by an innocent party. Such a person might be duped into committing unauthorized acts. Or acts induced by defense-intended deceptive information might cause unanticipated damage to networks or persons far from an intended adversary.
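One way to bait an intruder while limiting the risk to innocent parties is a honeytoken: a planted credential that grants nothing and that no legitimate user has reason to present, so any use of it is a high-confidence alarm rather than a trap that can ensnare a bystander. The sketch below is a minimal illustration; the names, the planting strategy, and the alerting mechanism are all assumptions.

```python
import secrets
import logging

logging.basicConfig(level=logging.WARNING)

def make_honeytoken(label: str) -> str:
    """Mint a unique decoy credential. It grants no access, so a legitimate
    user has no reason to present it; any use is a high-confidence alarm."""
    return f"{label}-{secrets.token_hex(16)}"

# Plant these where only an intruder would look (config files, old dumps).
# The service names here are hypothetical.
HONEYTOKENS = {make_honeytoken("svc-backup"), make_honeytoken("svc-report")}

def is_bait(token: str) -> bool:
    """Return True and raise an alert if a planted credential was used."""
    if token in HONEYTOKENS:
        logging.warning("Honeytoken presented -- likely unauthorized access")
        return True
    return False

# Example: an intruder who harvested a planted token trips the alarm.
stolen = next(iter(HONEYTOKENS))
assert is_bait(stolen)
```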
So we must discriminate among adversaries: between the serious criminal and the experimenting teenager, between the hobby hacker curious to see what she can do and the hacker for hire, and so on. To do so requires an intelligence program—a serious, continuing effort to gather information on those attempting to intrude on protected networks. The object of the effort is to allocate defensive resources and responses proportionate to the potential damage or expense of compromise.
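A proportionate-response policy can be sketched as a simple scoring function. The scales, weights, and thresholds below are invented for illustration; a real program would ground them in the intelligence actually gathered about each class of intruder.

```python
def threat_score(capability: int, intent: int, asset_value: int) -> int:
    """Crude triage: scale the response to the potential damage.
    capability/intent: 1 (curious teenager) .. 5 (professional, targeted).
    asset_value: 1 .. 5 for the data the intruder is near.
    All scales are hypothetical."""
    return capability * intent * asset_value

def response(score: int) -> str:
    if score >= 60:
        return "full deception operation and law-enforcement referral"
    if score >= 20:
        return "redirect to an instrumented decoy environment"
    return "log, block, and move on"

print(response(threat_score(capability=2, intent=1, asset_value=2)))  # hobbyist
print(response(threat_score(capability=5, intent=4, asset_value=4)))  # serious actor
```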
If there is anything we can know about deception, it is that unintended consequences will follow. How deception is done is not merely a matter of choosing a technique or a technology; those are limited only by the creativity of the adversary and the defender, and the skill with which they deploy them. It is a matter of judgment and design, and of the extent to which each side accepts responsibility for the consequences of its actions. The relationship is almost certain to be asymmetrical: one side will be constrained by legal and ethical responsibilities; the other will not.
In the largest frame, the purpose of deception is to reduce the level of uncertainty that accompanies any transaction between a computer security system and an intruder to the advantage of the defender.
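That purpose can be stated a little more formally. If the defender's uncertainty about who an intruder is can be modeled as the entropy of a probability distribution over hypotheses, then a deception that elicits a diagnostic response shifts the distribution and lowers the entropy. The hypotheses, priors, and likelihoods below are invented purely to illustrate the calculation.

```python
import math

def entropy(p: dict[str, float]) -> float:
    """Shannon entropy (bits) of a distribution over intruder hypotheses."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Prior: the defender barely knows who is knocking (illustrative numbers).
prior = {"teenager": 0.4, "criminal": 0.3, "state actor": 0.3}

# A decoy "financial records" share is touched; hypothetical likelihood of
# each class of actor taking that particular bait.
likelihood = {"teenager": 0.1, "criminal": 0.7, "state actor": 0.2}

# Bayes update on the observed response to the deception.
joint = {h: prior[h] * likelihood[h] for h in prior}
z = sum(joint.values())
posterior = {h: v / z for h, v in joint.items()}

print(f"uncertainty before: {entropy(prior):.2f} bits")   # ~1.57 bits
print(f"uncertainty after:  {entropy(posterior):.2f} bits")  # ~1.22 bits
```

The absolute numbers mean nothing; the point is the direction of the change. A well-designed deception makes the intruder's own behavior informative, to the defender's advantage.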
When Failure Happens
Efforts to deceive are likely to fail eventually. By definition, deceptions culminate in an exploitation that typically will be to the disadvantage of the adversary. It would be a dim opponent who did not realize that something had happened. But carnival knock-over-the-bottles games thrive on keeping people paying for the next sure-to-win throw.
The D-Day deceptions of World War II were highly successful but were in the process of failing by the time of the actual invasion, because the object of the deceptions was to cover the date of an event that was certain to occur sooner or later. When the German observers looked out at dawn on June 6, 1944, and saw a sea covered by ships headed in their direction, the deception was over.
Deception may fail for many reasons along the chain of events that must occur from original idea to conclusion. Every operation may fail due to the quality of the effort, the forces of nature, the laws of probability, or other reasons. Here are just a few possible reasons for failure:
- We may fail to get our story to the adversary, thereby failing to influence him.
- The adversary may misinterpret the information we provide him, thereby behaving in ways we may be unprepared to exploit.
- We may fail to anticipate all the adversary's potential responses.