09 May 2017
Foreword
This post is the result of some thinking about reverse engineering tools. I
have been reverse engineering for more than 15 years, but only very recently
have I begun to feel disappointed by the current tools.
Of course, I am not the first, and as Halvar said: “I am regularly infuriated
about the state of reverse engineering tools, and have only myself to blame.”
(source)
That being said, as most of the reverse engineering work I do is static
reversing on “exotic” platforms or operating systems, your perception may
differ quite a bit, particularly if your focus is automated analysis, which is
not my case. As I almost exclusively reverse interactively, I think it’s very
important for tools to integrate easily into the analyst’s workflow: some tools
are really awesome but only usable for automated analysis.
And of course, this is my own ranting :)
Reverse engineering techniques
10 years ago, most tools were pure disassemblers with nonexistent to poor advanced
static analysis capabilities. But recently, techniques for binary code analysis
have improved greatly and are getting practical. I will cover them quickly,
describing how I understand them and how they can be useful.
Static analysis techniques
I will not discuss here the merits of symbolic execution, abstract
interpretation or any other technique, as my point is about the practical tools
available to the reverser. Which underlying technique is or could be used is
out of scope.
Type propagation and reconstruction
Type propagation is quite simple to understand: knowing some types, either from
external APIs, FLIRT or from the analyst, use data flow analysis to propagate
types to arguments and relevant data. The challenge here is to do it both ways:
- forward, for example with argument types inside a function, or with return values;
- backward, when calling a function with known argument types.
IDA has been doing it, in a limited way, for a long time (more than 10 years).
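As a toy illustration of the forward case, here is a minimal sketch in plain
Python of propagating a known type along copy instructions until a fixpoint is
reached. The variable names and seed type are invented for illustration; real
tools work on a proper dataflow or SSA form.

```python
# Toy forward type propagation: any variable copied from a typed
# source inherits its type. Iterate until nothing new is inferred.

def propagate_types(copies, seed_types):
    """copies: list of (dst, src) pairs; seed_types: known types."""
    types = dict(seed_types)
    changed = True
    while changed:
        changed = False
        for dst, src in copies:
            if src in types and dst not in types:
                types[dst] = types[src]
                changed = True
    return types

# strlen()'s argument is known to be a `char *`; propagate it along
# the (hypothetical) copies feeding the call.
copies = [("v1", "arg0"), ("v2", "v1"), ("v3", "v2")]
print(propagate_types(copies, {"arg0": "char *"}))
```

The backward case works the same way with the copy edges reversed.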
Type reconstruction is more advanced: using both type propagation and access
patterns, reconstruct complex types such as structures or vtables.
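Access-pattern-based reconstruction can be sketched in the same toy style:
collect the base+offset accesses seen for one pointer and turn them into a
field list. The offsets and sizes below are invented for illustration.

```python
# Toy struct reconstruction: group (offset, size) memory accesses
# on one base pointer into a sorted field layout, keeping the
# widest access seen at each offset.

def reconstruct_struct(accesses):
    fields = {}
    for off, size in accesses:
        fields[off] = max(fields.get(off, 0), size)
    return [(off, fields[off]) for off in sorted(fields)]

# e.g. mov eax, [esi] ; mov bl, [esi+4] ; mov rcx, [esi+8]
print(reconstruct_struct([(0, 4), (4, 1), (8, 8), (0, 4)]))
# → [(0, 4), (4, 1), (8, 8)]
```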
The only two practical tools that I know of are:
both using the Hex-Rays decompiler SDK to analyse the decompiled output and
create the advanced structures.
Note that this is a research topic with several academic papers covering the
subject, but I don’t know of any tool with IDA integration.
Also, interesting approaches have been proposed for dynamic analysis, for
example Trace Surfing by A. Gianni.
Taint analysis
Taint analysis is also very useful for the reverser as it can help pinpoint
interesting parts of a binary or function, depending on the source of taint.
Ponce is very interesting as it uses Triton to provide taint analysis directly
in IDA, with an easy-to-use GUI. I think it is a good way to provide advanced
analysis; too bad it is limited to dynamic analysis.
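The idea is easy to sketch outside any framework: mark the sources, then walk
the code forward and mark everything computed from marked data. The
three-address format below is invented; Triton and friends of course track
taint at the byte or bit level on real semantics.

```python
# Minimal forward taint over a toy three-address program: an
# instruction is (dst, [operands]); dst becomes tainted when any
# operand already is. Returns the taint set and the hit indices.

def taint(program, sources):
    tainted = set(sources)
    hits = []
    for i, (dst, srcs) in enumerate(program):
        if any(s in tainted for s in srcs):
            tainted.add(dst)
            hits.append(i)
    return tainted, hits

# "input" is attacker-controlled; see which values it reaches.
prog = [("len", ["input", "const4"]),   # len = input + 4
        ("buf", ["const0"]),            # buf = 0
        ("ptr", ["buf", "len"])]        # ptr = buf + len
tainted, hits = taint(prog, {"input"})
print(sorted(tainted))  # input taints len, then ptr
```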
Data slicing
Data slicing could be described as a kind of backward taint analysis, where
the goal is to find which instructions and data inputs are used for a given
resulting register or memory space.
miasm’s blog gives a very good example and a practical tool ;)
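The core idea fits in a few lines: start from the target value and walk the
program backwards, keeping only the instructions whose result is still needed.
The (dst, operands) tuples below are an invented toy encoding, nothing like
miasm’s real IR.

```python
# Toy backward slice: keep the instructions that (transitively)
# feed the target, walking the list in reverse.

def backward_slice(program, target):
    needed = {target}
    kept = []
    for i in range(len(program) - 1, -1, -1):
        dst, srcs = program[i]
        if dst in needed:
            kept.append(i)
            needed.discard(dst)
            needed.update(srcs)
    return list(reversed(kept))

prog = [("a", ["in0"]),       # 0: feeds b
        ("x", ["in1"]),       # 1: irrelevant to the target
        ("b", ["a", "in2"])]  # 2: the target
print(backward_slice(prog, "b"))  # → [0, 2]
```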
Decompilation
Of course, the holy grail for reversers is a good decompiler, which currently
means Hex-Rays.
Some academic papers, such as this one, claim interesting results, but are not
available.
Lots of very interesting tools have appeared in the last few years, covering
some of the techniques I mentioned before. For example:
While they all provide powerful features, they certainly do not address the use
case I described in my introduction. Most of them could be (and have been) used
as external helpers to add output to IDA, but they do not provide a platform to
build interactive tools upon.
Disassemblers
In addition to the previous tools, the two main challengers to IDA are Relyze
and Binary Ninja.
While I did not try them extensively (Relyze does not work on Linux, and Binary
Ninja was slow when I tried the beta version), they look promising.
In particular, Binary Ninja’s IL and API seem to address many of the points I
will cover in the next section. Be a Binary Rockstar by Sophia d’Antoine, Peter
LaFosse and Rusty Wagner is a good showcase.
IDA
IDA is, like it or not, the only real tool you can use for serious reverse
engineering work, particularly on exotic platforms.
Why is IDA still reigning?
Based on the features I outlined before, IDA is not really good on most of
them. So why is it still the default RE tool?
For several reasons:
- its GUI works, really; sometimes it’s painful, but it works :)
- it supports so many architectures that it’s very rare to have something not
supported.
- its plugin system allows extending it to compensate for missing features (if
one bears the pain of using the SDK).
- its included library of information: type information and FLIRT signatures.
- its very reactive and knowledgeable support.
The decompiler is of course a killer feature for efficient reverse engineering,
particularly of C code.
Missing features
While this part may seem like throwing stones at IDA, I really think it’s a
great tool. Read it as an extended wishlist :)
Collaborative work
Clearly, one essential missing feature of IDA is the ability to do
collaborative work. One just needs to look at the various plugins attempting to
fix the problem:
collabREate,
SolIDArity, polichombr,
YaCO, etc.
One basic aspect of a collaboration feature would be the ability to work
simultaneously on the same IDB, synchronising information like a git
repository.
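To make the git analogy concrete, here is a sketch of a three-way merge of
function names, with the IDB reduced to a plain address-to-name dict. This is a
deliberate oversimplification of real IDB state (comments, types and
structures would all need the same treatment).

```python
# Git-style three-way merge of name annotations: take each side's
# change when the other side left the base value untouched, flag a
# conflict when both renamed the same address differently.

def merge_names(base, ours, theirs):
    merged, conflicts = dict(base), []
    for addr in set(ours) | set(theirs):
        a, b = ours.get(addr), theirs.get(addr)
        if a == b or b in (None, base.get(addr)):
            if a is not None:
                merged[addr] = a        # only our side changed it
        elif a in (None, base.get(addr)):
            merged[addr] = b            # only their side changed it
        else:
            conflicts.append(addr)      # both renamed it differently
    return merged, conflicts

base = {0x401000: "sub_401000"}
ours = {0x401000: "parse_header"}                          # we renamed it
theirs = {0x401000: "sub_401000", 0x402000: "send_reply"}  # they named another
print(merge_names(base, ours, theirs))
```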
But, while a life-changing enhancement, that wouldn’t be enough. Ideally, one
could also share structures through a server. For example, two analysts working
on a client and its server could share the structures for the protocol while
each works on a different binary.
Also, several attempts have been made over the years to create plugins that
integrate the analyst’s knowledge in IDA by recognizing functions already
reversed in the past: polichombr, CrowdRE, IDA Toolbag, etc.
The common point of (almost) all those tools is that they die slowly as their
authors move on to other things, which is definitely not helped by IDA’s
internals, which are not suited for such low-level integration.
Multiple files handling
Another painful aspect of IDA is the inability to work on several files at the
same time. One trivial example is a binary that uses a shared library. One has
to switch between the two IDBs all the time, copy-pasting information (typing,
for example) to “synchronise” it.
This gets particularly painful when working on more than two binaries at the
same time.
Semantics
This is where IDA lags behind most other recent tools: instructions semantics
and intermediate representation.
Currently, the only way to search for instructions is syntactic, which is
definitely not enough if we need to search for a changing pattern. A trivial
example is looking up the arguments passed to a function, which is basically
impossible.
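A two-line example shows why syntactic search is not enough: `xor eax, eax` and
`mov eax, 0` have the same effect but share no text. Lifting to even a trivial
IR makes them comparable. The mini-lifter below is an invented toy, not any
real IL.

```python
# Toy lifter: map two syntactically different "set register to 0"
# encodings to the same (dst, value) IR statement, so a semantic
# query matches both.

def lift(insn):
    mnem, ops = insn
    if mnem == "xor" and ops[0] == ops[1]:
        return (ops[0], 0)              # xor r, r   => r = 0
    if mnem == "mov" and isinstance(ops[1], int):
        return (ops[0], ops[1])         # mov r, imm => r = imm
    return None                         # not handled by the toy

a = lift(("xor", ["eax", "eax"]))
b = lift(("mov", ["eax", 0]))
print(a == b)  # both lift to ("eax", 0)
```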
Having an IR would also help tremendously when writing scripts independently of
the underlying architecture. Some would argue that the Hex-Rays decompiler
provides such an IR, but it is expensive and, most importantly, quite often
wrong.
Others
Some other points:
- the SDK is a pain: inconsistently named, poorly documented, with only partial
Python support. But it is powerful.
- C++ support is nonexistent.
- Porting information (typing, names) from one IDB to another can be painful.
Future ?
warning: personal feelings here
I think one of the main reasons IDA has not evolved much in 15 years is that
there was simply no competition. The market was a niche, but it feels like more
and more people are doing RE, expanding the market somewhat.
Considering that Hex-Rays made several million euros of profit in the past
years with 5-6 full-time employees, I am surprised that they did not start a
new project to replace the definitely outdated base that is the IDA core.
“Just” porting the app to 64 bits seems to be a major pain. So with all their
experience, their market share and their money, I think Hex-Rays could start an
IDA-ng from scratch, and be very successful! :)
Hopefully, the appearance of real competition like Binary Ninja or Relyze may
stir the field a bit and force Hex-Rays to fix the fundamental problems :)
21 May 2015
This post is mainly for reference but it can be useful.
Being tired of copy/pasting my IDA config and plugins after each update, I
decided to check what I could do to centralize my config.
As I’m running Linux, I expect everything to be configurable from ~/.idapro.
ida.cfg
You can override configuration options for IDA in ~/.idapro/idauser.cfg.
For example, the classic:
#define DEMNAM_CMNT 0 // comments
#define DEMNAM_NAME 1 // regular names
#define DEMNAM_NONE 2 // don't display
DemangleNames = DEMNAM_NAME // Show demangled names as comments
IDAPython
IDAPython will load ~/.idapro/idapythonrc.py, which can then be used to
specify additional paths for Python, for example:
import sys
sys.path.append('/home/raph/.idapro/python')
You can now add Python libraries that will be available in all your IDA versions.
For example:
python/
├── miasm2 -> /home/raph/bin/python/lib/python2.7/site-packages/miasm2
└── pyparsing.py -> /usr/lib/python2.7/dist-packages/pyparsing.py
Plugins
Unfortunately there’s no easy way right now to handle a custom user directory
for plugins.
While discussing the issue with Ilfak, he offered the following workaround:
User plugins can be handled the following way: define an IDAPLG envvar
that points to the user plugins directory. Create symlinks to all IDA
plugins from this directory.
This is not exactly the same, but it may help if the user has no write access
to the IDA directory.
Maybe a future version will offer this feature :)
Update: IDA 6.9 supports the IDAUSR environment variable. See the doc for
awesomeness :)