Environment-sensitive intrusion detection

RAID, (2006): 185-206

Abstract

We perform host-based intrusion detection by constructing a model from a program's binary code and then restricting the program's execution by the model. We improve the effectiveness of such model-based intrusion detection systems by incorporating into the model knowledge of the environment in which the program runs, and by increasing the …

Introduction
  • A host-based intrusion detection system (HIDS) monitors a process’ execution to identify potentially malicious behavior.
  • In a model-based anomaly HIDS or behavior-based HIDS [3], deviations from a precomputed model of expected behavior indicate possible intrusion attempts.
  • An execution monitor verifies a stream of events, often system calls, generated by the executing process.
  • The monitor rejects event streams that deviate from the model; a minimal sketch of such a monitoring loop appears after this list.
  • Previous statically constructed models allowed execution behaviors possible in any execution environment.
  • The environment can significantly constrain a process’ execution, disabling entire blocks of functionality and restricting the process’ access to resources
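
The following is a minimal sketch of such a monitoring loop, not the authors' implementation: the program model is treated as a finite automaton over system-call names, and the monitor advances through it as the process emits calls, rejecting any stream the automaton cannot accept. The transition table and call names are invented for illustration; the paper's models are richer (for example, context-sensitive pushdown models).

```python
# Minimal sketch of a model-based execution monitor (illustrative only).
# The program model here is a finite automaton over system-call names; the
# systems described in the paper use richer models, but the monitoring loop
# has the same shape: advance the model on each event, reject on deviation.

class ProgramModel:
    def __init__(self, transitions, start):
        # transitions maps (state, syscall_name) -> next state
        self.transitions = transitions
        self.state = start

    def accept(self, syscall_name):
        """Advance the model on one observed event; False means deviation."""
        key = (self.state, syscall_name)
        if key not in self.transitions:
            return False
        self.state = self.transitions[key]
        return True


def monitor(model, event_stream):
    """Verify a stream of system-call events against the program model."""
    for n, syscall_name in enumerate(event_stream):
        if not model.accept(syscall_name):
            # Deviation from expected behavior: flag a possible intrusion.
            return f"rejected at event {n}: unexpected {syscall_name}"
    return "accepted"


# Hypothetical model: open, then any number of reads, then close and exit.
transitions = {
    (0, "open"): 1,
    (1, "read"): 1,
    (1, "close"): 2,
    (2, "exit"): 3,
}
print(monitor(ProgramModel(transitions, 0), ["open", "read", "read", "close", "exit"]))
print(monitor(ProgramModel(transitions, 0), ["open", "execve"]))  # flagged
```
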
Highlights
  • A host-based intrusion detection system (HIDS) monitors a process’ execution to identify potentially malicious behavior
  • Processes often read the environment—configuration files, command-line parameters, and environment variables known at process load time and fixed for the entire execution of the process
  • This paper aims to demonstrate the value of environment-sensitive intrusion detection and does not yet consider the problem of automated dependency identification
  • We evaluated the precision of environment-sensitive program models using average reachability
  • Program models used for model-based intrusion detection can benefit from our new analyses
  • Adding environment sensitivity continues to strengthen program models by adding environment features to the models (a toy example of environment-based pruning follows this list)
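
As a toy illustration of environment-based pruning, the sketch below removes, at process load time, those model edges whose environment guards do not hold for the current command line and environment variables. The guard format, the --enable-exec flag, and the helper names are hypothetical; the paper's dependency identification and model encoding are more involved.

```python
import os
import sys

# Toy environment-sensitive pruning (the guard format is invented).
# Each model edge may carry a guard naming the environment condition under
# which that execution path is feasible.  Because command-line parameters and
# environment variables are fixed at process load time, edges whose guards are
# false can be pruned once, before monitoring begins.

def guard_holds(guard, argv, env):
    if guard is None:                      # unconditional edge
        return True
    kind, name = guard
    if kind == "flag":                     # path only taken when a flag is given
        return name in argv
    if kind == "env":                      # path only taken when a variable is set
        return name in env
    return True


def prune_model(edges, argv, env):
    """Keep only edges whose environment guard holds for this execution."""
    return [(src, call, dst) for (src, call, dst, guard) in edges
            if guard_holds(guard, argv, env)]


# Hypothetical model fragment: the execve path exists only when the program
# was started with an (invented) --enable-exec flag.
edges = [
    (0, "open",   1, None),
    (1, "read",   1, None),
    (1, "execve", 2, ("flag", "--enable-exec")),
    (1, "close",  3, None),
]
print(prune_model(edges, sys.argv[1:], os.environ))
```
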
Results
  • The authors evaluated the precision of environment-sensitive program models using the average reachability measure (a simplified illustration of this style of metric follows this list).
  • A precise model closely represents the program for which it was constructed and offers an adversary little ability to execute attacks undetected.
  • Models utilizing environment sensitivity and the argument analysis should show improvement over the previous best techniques [5, 10].
  • The authors' static argument recovery improved precision by 61%–100%.
  • Adding environment sensitivity to the models increased the gains to 76%–100%.
  • The authors end by arguing that model-based intrusion detection can benefit from these analyses.
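
The average reachability measure itself is defined in the authors' earlier work [5, 10]; the sketch below computes only a simplified stand-in for that kind of precision metric, assuming it can be approximated by averaging, over model states, the number of distinct system calls the model would accept as the next event. Lower values mean a tighter model and less room for an undetected attack. The edge-list encoding is invented for this example.

```python
from collections import defaultdict

# Simplified stand-in for an average-reachability-style precision metric:
# for each state of a finite-automaton model, count the distinct system calls
# the model would accept as the next event, then average over all states.
# Lower is better: the adversary has fewer undetected moves available.

def average_next_calls(edges):
    accepted = defaultdict(set)            # state -> set of accepted next calls
    states = set()
    for src, call, dst in edges:
        accepted[src].add(call)
        states.update((src, dst))
    return sum(len(accepted[s]) for s in states) / len(states)


loose = [(0, "open", 1), (0, "execve", 1), (0, "unlink", 1), (1, "exit", 2)]
tight = [(0, "open", 1), (1, "exit", 2)]
print(average_next_calls(loose))   # larger value: more attacker opportunity
print(average_next_calls(tight))   # smaller value: more constrained model
```
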
Conclusion
  • Program models used for model-based intrusion detection can benefit from the new analyses.
  • The authors' static argument recovery reduces attack opportunities significantly further than prior argument analysis approaches.
  • Adding environment sensitivity continues to strengthen program models by adding environment features to the models.
  • The usefulness of these model-construction techniques is shown in the results, where the models could severely constrain several test programs’ execution
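
Static argument recovery constrains not just which system calls may occur but also the argument values they may carry. The check below is an invented illustration of enforcing such recovered argument constraints; the constraint format, call names, and file paths are hypothetical, and the authors' actual data-flow analysis for recovering the values is not shown.

```python
# Illustrative enforcement of statically recovered argument values.
# A model event optionally constrains argument positions to known values;
# an observed call must match both the call name and every constraint.
# The constraint format and the example values are hypothetical.

def event_allowed(observed, model_event):
    name, args = observed
    expected_name, constraints = model_event
    if name != expected_name:
        return False
    return all(args[i] == value for i, value in constraints.items())


# Hypothetical: the model knows this open always targets one file, read-only.
model_event = ("open", {0: "/etc/passwd", 1: "O_RDONLY"})
print(event_allowed(("open", ["/etc/passwd", "O_RDONLY"]), model_event))  # allowed
print(event_allowed(("open", ["/etc/shadow", "O_RDWR"]), model_event))    # rejected
```
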
Tables
  • Table 1: Test programs, workloads, and instruction counts. Instruction counts include instructions from any shared objects used by the program
  • Table 2: Environment dependencies in our test programs. We manually identified the dependencies via inspection of source code and object code
  • Table 3: Performance overheads due to execution enforcement using environment-sensitive models. Model update is the one-time cost of pruning from the model execution paths not allowed in the current environment. The enforcement times include both program execution and verification of each system call executed against the program’s model
Related work
  • In 1994, Fix and Schneider added execution environment information to a programming logic to make program specifications more precise [7]. The logic better specified how a program would execute, allowing for more precise analysis of the program in a proof system. Their notion of environment was general, including properties such as scheduler behavior. We are proposing a similar idea: use environment information to more precisely characterize expected program behavior in a program model. As our models describe safety properties that must not be violated, we focus on environment aspects that can constrain the safety properties.

    Chinchani et al. instrumented C source code with security checks based upon environment information [1]. Their definition of environment primarily encompassed low-level properties of the physical machine on which a process executes. For example, knowing the number of bits per integer allowed the authors to insert code into a program to prevent integer overflows. This approach is specific to known exploit vectors and requires source-code editing, making it poorly suited for our environment-sensitive intrusion detection.
Funding
  • Giffin was partially supported by a Cisco Systems Distinguished Graduate Fellowship
  • Somesh Jha was partially supported by NSF Career grant CNS-0448476
  • This work was supported in part by Office of Naval Research grant N00014-01-1-0708 and NSF grant CCR-0133629
References
  • R. Chinchani, A. Iyer, B. Jayaraman, and S. Upadhyaya. ARCHERR: Runtime environment driven program safety. In 9th European Symposium on Research in Computer Security, Sophia Antipolis, France, Sept. 2004.
  • E. M. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Counterexample-guided abstraction refinement. In Computer Aided Verification, Chicago, IL, July 2000.
  • H. Debar, M. Dacier, and A. Wespi. Towards a taxonomy of intrusion-detection systems. Computer Networks, 31:805–822, 1999.
  • J. Esparza, D. Hansel, P. Rossmanith, and S. Schwoon. Efficient algorithms for model checking pushdown systems. In Computer Aided Verification, Chicago, IL, July 2000.
  • H. H. Feng, J. T. Giffin, Y. Huang, S. Jha, W. Lee, and B. P. Miller. Formalizing sensitivity in static analysis for intrusion detection. In IEEE Symposium on Security and Privacy, Oakland, CA, May 2004.
  • H. H. Feng, O. M. Kolesnikov, P. Fogla, W. Lee, and W. Gong. Anomaly detection using call stack information. In IEEE Symposium on Security and Privacy, Oakland, CA, May 2003.
  • L. Fix and F. B. Schneider. Reasoning about programs by exploiting the environment. In 21st International Colloquium on Automata, Languages, and Programming, Jerusalem, Israel, July 1994.
  • D. Gao, M. K. Reiter, and D. Song. On gray-box program tracking for anomaly detection. In 13th USENIX Security Symposium, San Diego, CA, Aug. 2004.
  • J. T. Giffin, S. Jha, and B. P. Miller. Detecting manipulated remote call streams. In 11th USENIX Security Symposium, San Francisco, CA, Aug. 2002.
  • J. T. Giffin, S. Jha, and B. P. Miller. Efficient context-sensitive intrusion detection. In 11th Network and Distributed Systems Security Symposium, San Diego, CA, Feb. 2004.
  • httpd. Solaris manual pages, chapter 8, Feb. 1997.
  • J. Koziol, D. Litchfield, D. Aitel, C. Anley, S. Eren, N. Mehta, and R. Hassell. The Shellcoder’s Handbook: Discovering and Exploiting Security Holes. Wiley, 2003.
  • C. Kruegel, D. Mutz, F. Valeur, and G. Vigna. On the detection of anomalous system call arguments. In 8th European Symposium on Research in Computer Security, pages 326–343, Gjøvik, Norway, Oct. 2003.
  • L.-c. Lam and T.-c. Chiueh. Automatic extraction of accurate application-specific sandboxing policy. In Recent Advances in Intrusion Detection, Sophia Antipolis, France, Sept. 2004.
  • S. S. Muchnick. Advanced Compiler Design and Implementation. Morgan Kaufmann Publishers, San Francisco, CA, 1997.
  • R. Sekar, V. N. Venkatakrishnan, S. Basu, S. Bhatkar, and D. C. DuVarney. Model-carrying code: A practical approach for safe execution of untrusted applications. In ACM Symposium on Operating System Principles, Bolton Landing, NY, Oct. 2003.
  • M. Sharir and A. Pnueli. Two approaches to interprocedural data flow analysis. In S. S. Muchnick and N. D. Jones, editors, Program Flow Analysis: Theory and Applications, chapter 7, pages 189–233. Prentice-Hall, 1981.
  • K. Tan, J. McHugh, and K. Killourhy. Hiding intrusions: From the abnormal to the normal and beyond. In 5th International Workshop on Information Hiding, Noordwijkerhout, Netherlands, Oct. 2002.
  • U.S. Department of Energy Computer Incident Advisory Capability. M-026: OpenSSH uselogin privilege elevation vulnerability, Dec. 2001.
  • D. Wagner and D. Dean. Intrusion detection via static analysis. In IEEE Symposium on Security and Privacy, Oakland, CA, May 2001.
  • D. Wagner and P. Soto. Mimicry attacks on host based intrusion detection systems. In 9th ACM Conference on Computer and Communications Security, Washington, DC, Nov. 2002.
  • D. A. Wagner. Static Analysis and Computer Security: New Techniques for Software Assurance. PhD dissertation, University of California at Berkeley, Fall 2000.
  • M. Yannakakis. Graph-theoretic methods in database theory. In ACM Symposium on Principles of Database Systems, Nashville, TN, Apr. 1990.