
Publication

Combining AI and AM - Improving approximate matching through transformer networks

Frieder Uhlig; Lukas Struppek; Dominik Hintersdorf; Thomas Göbel; Harald Baier; Kristian Kersting
In: Digital Investigation, Vol. 45, No. Supplement, Pages 1-13, arXiv, 2023.

Abstract

Approximate matching is a well-known concept in digital forensics to determine the similarity between digital artifacts. An important use case of approximate matching is the reliable and efficient detection of case-relevant data structures on a blacklist (e.g., malware or corporate secrets) when only fragments of the original are available. For instance, if only a cluster of indexed malware is still present during the digital forensic investigation, the approximate matching algorithm should be able to assign the fragment to the blacklisted malware. However, traditional approximate matching functions like TLSH and ssdeep fail to detect files based on their fragments if the presented piece is relatively small compared to the overall file size (e.g., one-third of the total file). A second well-known issue with traditional approximate matching algorithms is their lack of scaling due to ever-increasing lookup databases. In this paper, we propose an improved matching algorithm based on transformer models from the field of natural language processing. We call our approach Deep Learning Approximate Matching (DLAM). As a concept from artificial intelligence, DLAM learns characteristic blacklisted patterns during its training phase. DLAM is then able to detect these patterns in a typically much larger file; that is, DLAM focuses on the use case of fragment detection. Our evaluation is inspired by two widespread blacklist use cases: the detection of malware (e.g., in JavaScript) and of corporate secrets (e.g., PDF or office documents). We reveal that DLAM has three key advantages compared to the prominent conventional approaches TLSH and ssdeep. First, it makes the tedious extraction of known-to-be-bad parts obsolete, which has so far been necessary before any search for them with approximate matching algorithms. This allows efficient classification of files on a much larger scale, which is important due to the exponentially increasing amount of data to be investigated. Second, depending on the use case, DLAM achieves a similar (in the case of mrsh-cf and mrsh-v2) or even significantly higher accuracy (in the case of ssdeep and TLSH) in recovering fragments of blacklisted files. For instance, in the case of JavaScript files, our assessment shows that DLAM provides an accuracy of 93% on our test corpus, while TLSH and ssdeep show a classification accuracy of only 50%. Third, we show that DLAM enables the detection of file correlations in the output of TLSH and ssdeep even for fragment sizes where the respective matching functions of TLSH and ssdeep fail.

*Equal contribution. Published at DFRWS USA 2023 as a conference paper.
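As an illustration only (not the authors' implementation), the following Python sketch shows the general idea the abstract describes for detecting correlations in fuzzy-hash output with a transformer: a TLSH digest is tokenized character by character and passed through a small PyTorch transformer encoder with a binary classification head. The class name DigestClassifier, the layer sizes, and the file name sample.js are assumptions made for this example; the paper's DLAM models and training setup differ.

    # Hypothetical sketch: classify whether a file (or fragment) contains
    # blacklisted content by encoding its TLSH digest and feeding the token
    # sequence into a small transformer encoder. Illustrative only.

    import tlsh                      # py-tlsh bindings
    import torch
    import torch.nn as nn

    def digest_to_tensor(data: bytes, max_len: int = 72) -> torch.Tensor:
        """Compute a TLSH digest and encode its characters as integer tokens."""
        digest = tlsh.hash(data)                 # hex string; 'TNULL' for tiny inputs
        tokens = [ord(c) % 128 for c in digest[:max_len]]
        tokens += [0] * (max_len - len(tokens))  # pad to a fixed length
        return torch.tensor(tokens, dtype=torch.long)

    class DigestClassifier(nn.Module):
        """Transformer encoder over digest characters with a binary output head."""
        def __init__(self, vocab_size: int = 128, d_model: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, 2)    # blacklisted vs. benign

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            x = self.embed(tokens)               # (batch, seq, d_model)
            x = self.encoder(x).mean(dim=1)      # average over the sequence
            return self.head(x)                  # class logits

    # Usage: score a fragment read from disk (file name is illustrative).
    model = DigestClassifier()
    with open("sample.js", "rb") as f:
        batch = digest_to_tensor(f.read()).unsqueeze(0)
    logits = model(batch)                        # train on a labeled corpus first

The same scheme could be applied to ssdeep digests or to raw byte windows of a file; the point is only that the lookup against an ever-growing hash database is replaced by a single learned classifier, which is the scaling advantage the abstract highlights.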
