Introducing Vulnerability Research Feeds

February 10th, 2021 – Alisa Esage

Key insights
  • Strategically effective software vulnerability discovery and security hardening in even a single modern system requires deep, systematic technical and practical knowledge of an entire class of systems.
  • As technology development accelerates, systematic knowledge is becoming less of a constant and more of a dynamic stream of new information that has to be uncovered and processed continuously.
  • Leveraging mechanically scaled labour to obtain deeply connected knowledge across several research targets is suboptimal.
  • Requirements for a single-source, systematic vulnerability research knowledge feed focused on a class of systems.

Background

For the past 10 years I have been researching zero-day vulnerabilities, and vulnerability discovery as an interplay of cognitive and applied processes, in a variety of modern systems. Around 2014-2016 I was focused on web browsers and, as a matter of personal preference, took the less common path of investigating multiple implementations at once rather than targeting just one product. Thanks to this unpopular parallelization I quickly noticed a trend: security bugs that affect systems of the same class tend to be very similar, to the extent that a slightly modified proof-of-concept testcase for a bug in implementation X can hit a previously unknown bug in the completely different code of implementation Y right away.

The trend "different code, same bug patterns" is especially pronounced in case of (sub)systems based on well-defined public specifications (such as JavaScript engines), as opposed to systems based on abstract functional requirements ideas and opportunistic coding (such as web browser engines in general). Moreover, attack vectors and vulnerability classes exemplified by security issues in one implementation (usually an open source one) tend to directly map out insecurity patterns in another, proprietary implementation for which direct threat modeling is hard. As an example, it is often possible to find previously unknown vulnerabilities by simply looking at some open source security patches, and translating the general idea to more hardened proprietary systems in the same class. As such, deep security research knowledge specialized on one software system is directly relevant, and essential, to all other systems of the same class.

The problem is that modern software systems are complex, and each implementation is a separate world worthy of many years of devoted introspection. This is why the majority of advanced vulnerability researchers tend to specialize in just one implementation, such as one specific browser or one hypervisor product, occasionally moving on to another after some years of narrow-focus research. Acquiring systematic knowledge based on multiple implementations takes many years, and is barely incentivized on mainstream career paths.

The knowledge

So, how do you obtain such deep, systematic knowledge about an entire class of systems? For the purposes of a typical industrial setup, there exists a linear spectrum of options that can be reduced to two extreme points.

Option 1: hire a team of researchers, each specialized in one specific implementation. Motivate them to collaborate and share knowledge.
Option 2: hire one highly experienced multi-implementation expert in the role of a Research Director, and maybe some interns.

Option 1 is impractical in the many cases where the de-facto research target is just one system, so that the work of the majority of the team would fail to generate value directly; and it is suboptimal in all other cases due to the constraints of human communication. When knowledge is stored in a single brain, analytic correlations and deep insights are established automatically. For a team, deep knowledge correlation represents a standalone management challenge that is hard to solve completely. In the real world, team-based research usually only enables a certain quantitative scaling, and rarely produces the kind of deep systematic knowledge that would yield a solid competitive advantage and valuable insights.

Option 2 solves the issues of inter-human cognition, but it is not realistic due to the scarcity of the talent required. Whilst multi-implementation specialists do occasionally evolve in the wild, such a capability is founded on a peculiar mindset that is poorly compatible with employment scenarios. The result is that such specialists tend to work independently or run their own businesses, and are typically not interested in doing consulting jobs or otherwise sharing their knowledge directly. Deep knowledge is power and money.

The concept

The basic idea of Vulnerability Research Feeds: offer knowledge with the scope of Option 1 and the quality of Option 2, at the cost of just one median salary.

Any optimization comes at a price. In this case the price is a loss of perceived exclusivity, as such knowledge feeds will be available to multiple consumers. Why "perceived"? Because in reality, hiring an employee or a team, or even a presumably advanced subcontractor under an NDA, does not guarantee you exclusive research output. Researchers around the world tend to walk the same paths, especially beginners, which in the industry of advanced vulnerability research is greatly facilitated by virtually nonexistent information sharing. And most importantly: seeing previously unseen opportunities requires both experience and awareness, and a lot of both.

The primary content of the feeds will be similar in spirit and organization to that of my trainings. It will include theoretical essentials, implementation internals, analyses of patched vulnerabilities, tooling, and threat modeling considerations – but with a specific focus on the evolving state of the art (such as new bugs and new technologies), and on the deep research vectors that I find interesting for strategic vulnerability discovery. This is the kind of neutral raw intelligence that is an essential input to both offensive and defensive workflows – just like public blogs on vulnerability analysis.

Ethical neutrality is a principle. The type of intelligence that I aim to create should be available and valuable to all interested parties, with the exception of criminal entities – from major software and hardware vendors to boutique security firms.

How does it work?

Suppose you are already doing (or planning to start) a targeted software security research project, for either offensive purposes (find previously unknown security vulnerabilities and develop good exploits) or defensive purposes (write secure code, or build specialized defensive software). There is a continuous influx of new information that has to be factored into the workflow for its output to stay relevant and competitive in the long run, and effective immediately. In some cases such information is publicly available, such as blog posts and conference presentations from researchers working in the same field. In other cases it is not readily available, or not obvious at all, and may require an advanced effort based on specialized skills – such as extracting insecurity patterns buried in closed-source binary security patches, or prototyping hard vulnerabilities.

A vulnerability research feed covering a class of systems – such as Hypervisors, Web Browsers, or Basebands – should consist of the following general categories of knowledge streams:

  • Analysis of security patches.

The vulnerability patterns and attack vectors disclosed by security patches are the single most important body of knowledge for ongoing vulnerability research or software hardening. As of today, such information rarely makes it to public blogs, as offensive security researchers and firms recognize its strategic value with respect to "high value targets" (highly popular, widely deployed and mission-critical systems).

  • Code base updates and new features.

New or changed code in software means new security issues, new opportunities for exploit engineering, and new competitive edges for defensive developers. Newly added functionality in particular tends to contain a certain number of shallow, short-lived security bugs. As technology accelerates, new features and major new code are introduced faster than ever, which presents a challenge of its own to stay on top of.

  • Deep and universal system internals.

As opposed to new high-level software features that may be added or changed daily, certain parts of any large code base basically never change. Subsystems such as memory management, internal state management, custom operating system APIs, and various core interfaces represent a timeless layer of strategically valuable knowledge that is commonly leveraged for exploit primitives and long-lived exploits, and thus deserves an ongoing research effort.

  • Threat models, attack surfaces and attack vectors.

In an exponentially accelerating technological landscape, threat models are no longer static abstractions that can be defined in a textbook once and for all. Any major new software feature adds a new attack surface to the threat model, with a dozen or so distinct attack vectors that can be targeted with fuzzing or static analysis procedures (a minimal harness sketch follows this list). A less obvious side of the same trend: it is not uncommon for a code change to introduce a major paradigm shift that enables new security issues on top of the same old codebase, or makes entirely new security risks relevant. As such, practical attack models represent a crucial part of the dynamically changing research knowledge that has to be updated systematically.

  • Potentially interesting attack surfaces.

Identifying, reverse-engineering and attack-prototyping various non-obvious attack surfaces is key to a long-term strategic advantage in software vulnerability research. In any real-world complex system, only a small subset of attack surfaces is publicly known, and another subset (larger, but still incomplete) may be modeled with a dedicated effort. Uncovering the attack vectors that are totally unexpected, and thus present highly valuable offensive opportunities, remains the exclusive capability of a highly skilled and insightful human mind.
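
As promised above, here is what the per-feature fuzzing work typically looks like in practice: a minimal libFuzzer-style harness sketch in C. The entry point parse_new_feature is a hypothetical stand-in for whatever interface a newly shipped feature actually exposes:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical entry point of a newly shipped feature; in a real project
       this would be the parser or message handler the new code exposes. */
    extern int parse_new_feature(const uint8_t *data, size_t size);

    /* Standard libFuzzer entry point: the fuzzing engine calls this function
       repeatedly with mutated inputs. Each distinct attack vector identified
       in the threat model gets a small harness of this shape. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_new_feature(data, size);
        return 0;
    }

Built with clang -fsanitize=fuzzer,address, such a harness turns each newly identified attack vector into a continuously exercised test target.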

Anything else?

The directions outlined in the previous section represent the core research activities required in every practical scenario involving software security engineering. Aside from that, any industrial setup heavily depends on a set of somewhat more mundane procedures:

  • Regular specialized training of the workforce.
  • Following and integrating new public knowledge.
  • Inspiration, daily research tips, and community discussions, among other things.

These procedures are crucial for creative technical R&D workflows, and hard to satisfy with the nonsense-infused streams of social networks.

Keeping up with public knowledge becomes more challenging as software industries mature and more publications become available (as in the case of hypervisor security research, for instance), so that even basic navigation in the unstructured sea of public information requires quite a bit of specialized and informed effort.

The research feeds should cover these procedures to some extent, by offering prebooked seats in trainings, access to private discussion channels, and a curated stream of technical news.

Conclusions

To sum it up, my general vision for Vulnerability Research Feeds: deeply insightful, strategically essential, and immediately actionable streams of dynamic knowledge for all offensive and defensive code security workflows – empowering security engineers and directors to focus on what is most important: innovation, cutting-edge competitive developments, and actual advanced vulnerability research, which, armed with all that systematic and up-to-date knowledge, would finally "just work".

The details are under discussion with potential partners – discussions that you are welcome to join.

Follow us on Twitter and Telegram to get updates.

Categories: Products, Intelligence, Vulnerability Research
