What command would reveal the most information about the groups that a user named Bob belongs to?
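The id command typically gives the most complete answer in a single line: it prints the user's UID, primary group, and every supplementary group, while groups lists only the group names.

id bob        # prints Bob's UID, primary GID, and all supplementary groups
groups bob    # prints only the names of the groups Bob belongs to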

Question 1: What command (include the full syntax) would you use to change access control for the group to read, write, and execute?

chmod g+rwx <files/directory>

Explanation: This command grants read, write, and execute permissions on the given files or directory to the members of its owning group.

Question 2: How would you grant full access to the directory students only to testuser1 and testuser2?

This can be done in 4 steps

1. Create a new group named teststudents

sudo groupadd teststudents

2. Add the users testuser1 and testuser2 to teststudents

sudo adduser testuser1

sudo adduser testuser2

sudo usermod -aG teststudents testuser1

sudo usermod -aG teststudents testuser2

3. Change the group ownership of the students directory to teststudents

sudo chown testuser1:teststudents students

or

sudo chown testuser2:teststudents students

4. Change the permissions so that only the owner and the group have access

sudo chmod 770 students

(Note that 770, rather than 070, is needed here: permission checks use the owner bits for the owning user, so with 070 the owner set in step 3 would be denied access.)

Question 3: Explain the effect of the command chmod 777 text4 on the file text4 (if necessary, use the man command).

Explanation: chmod sets the permissions for a file based on the values we provide in the mode field

The mode is an octal number: the first digit sets the permissions for the owner (user), the second for the group, and the third for others.

7 means full access: read, write, and execute (4+2+1)

4 means read

2 means write

1 means execute

6 means read and write (4+2)

Thus, 777 gives full access (read, write, and execute) on the file text4 to everyone on the system.
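For example, the effect is visible in the mode column of a long listing:

chmod 777 text4
ls -l text4    # the mode column now shows -rwxrwxrwx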

Question 4: Examine the following extract from a shadow file:

What is the encryption algorithm used for the /etc/shadow file?

The hashing algorithm can be determined from the prefix of the second field (the one after the first colon) of each shadow file entry.

It can be one of these:

$1: MD5 hashing algorithm

$2: Blowfish Algorithm

$3: Eksblowfish Algorithm

$4: NT hashing algorithm

$5: SHA-256 Algorithm

$6: SHA-512 Algorithm

In this extract the password fields begin with $6 (see the testuser1 entry under Question 6), so the passwords are hashed with SHA-512.

Question 5: Refer to the same shadow file extract. What is the status of the gnome-initial-setup account?

The account has been disabled

More information can be found here:

https://www.tldp.org/LDP/lame/LAME/linux-admin-made-easy/shadow-file-formats.html

We can view the shadow file extract using this command

sudo cat /etc/shadow | grep gnome

Question 6: Refer to the same shadow file extract. When will user1 and user2 have to change passwords?

This is the entry for testuser1 in the shadow file, obtained using the command above:

testuser1:$6$8om5avuh$8X/p04ogp.MyWXXkax88XkUIFy56JSUFUb5Csvp20XIkZ6zZoL2b67w9jWJCNguFaPxpd2YOAX9laZOzsOJ0V1:18338:0:99999:7:::

The fifth field, 99999, is the maximum password age in days. A value of 99999 means the password effectively never expires, so testuser1 (and likewise testuser2) will not be forced to change their passwords. Please refer to the same URL for the shadow file format.
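The same ageing values can be checked without reading /etc/shadow directly by using the standard chage utility:

sudo chage -l testuser1    # shows last change, minimum/maximum age, and warning period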

Question 7: Explain how you verified whether user1 and user2 have read, write, and execute access to the contents of the students directory.

We can first check that other users cannot access the directory by running the commands below from one of their accounts; they should receive a "Permission denied" error:

cat <somefile>

or

ls students

Then we can run the same commands after logging in as testuser1 or testuser2, and we should be able to read the contents and list the directory.

Another way to check would be to do

ls -la students

on the terminal, which will show us the permission bits, owner, and group of the directory.
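A quick way to exercise both accounts without switching sessions is to run the checks through sudo -u (this assumes you have sudo rights; the unprivileged nobody account is used here for the negative test):

sudo -u testuser1 ls students            # should succeed
sudo -u testuser1 touch students/probe   # should succeed if write access works
sudo -u nobody ls students               # should fail with "Permission denied"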

Privilege Escalation & Passwords

Jaron Bradley, in OS X Incident Response, 2016

Analysis

We’ll start our analysis simply by looking at which users are allowed to access root via the sudo command. This can be done by looking in the /private/etc/sudoers file.


This is a standard OS X sudoers file. It doesn’t look like it’s been tampered with in any way. It really only consists of comments and a few settings. The most important are the entries under the “User privilege specification” comment.


This tells us that anyone who belongs to the “admin” group is allowed to perform tasks using the sudo binary. Our next check is to see which users are in the admin group. We collected group info of each user during our bash calls in Chapter 3.


This snippet shows that mike, test, and root are all part of the admin group. When performing analysis on an OS X system, the odds are high that the system is a personal computer and you will only be dealing with the system owner who is also part of the admins group.

Our next step will be to see if there are any odd setuid binaries on the system. We can search the fileinfo.txt we built in Chapter 4 for this information. Inside this file we recorded setuid and setgid binaries.


The majority of the files here appear to be standard OS X setuid binaries that come preinstalled. After you’ve learned to identify the ones that are normal, you’ll notice that there is one oddball that sticks out.

/usr/bin/mac_auth, 777, file, 0, 0, 8496, SETUID

We can instantly mark this file as suspicious for a few different reasons. First, it follows a familiar naming scheme: we’ve seen the attacker’s malware drop many different items that run under an OS X style naming theme. Second, this file has permissions of 777; no standard OS X setuid binaries have read, write, and execute permissions set for every class. Finally, we see that this file belongs to UID 0 and GID 0, which are both root. We can further investigate this file by looking at the file timeline.


The earliest timestamp we see for mac_auth occurs 15 min after the installation of the malware (6:55). Given the time and details on this binary, it’s more likely that this was dropped by the attacker rather than exploited. This setuid binary could be a failsafe to regain root access if the attacker were to be discovered. Let’s check to see whether mac_auth was active on the system.


At the time we ran our collection scripts mac_auth was not an active process. Can we find anything regarding this file in memory?


Here we see the file was dropped on the system in an unspecified location and then moved to the /usr/bin/ directory. The setuid bit was then applied. We still don’t see any signs that it was executed, but we can’t be 100% sure. All of our artifacts have left us with little idea of what exactly this setuid binary does. This means our best bet is to return to the victim system and recover the file.

Moving on, now that we’ve established which users are administrators, let’s check to see if any have enabled automatic login. You can do this either by looking for kcpassword inside our collected artifacts or by searching the fileinfo.txt file again.


It looks like automatic login was not enabled because we see no kcpassword file on the system. This is good news for us since the attacker would have easily been able to recover the root password if this file existed.

Another question we have to ask ourselves is whether or not the attacker copied the keychain to a remote system. As discussed in this chapter, the attacker can take the user’s unlocked keychain (~/Library/Keychains/login.keychain) and access all the information inside of it if they can recover the user’s login password. Unfortunately, it will often be difficult to know whether or not the login.keychain file was accessed. Looking at the accessed timestamp won’t benefit us because this file is accessed on a regular basis by legitimate tools. Our best bet is trying to find it in memory, but the analyst should almost always assume that this file will be collected by the adversary.


The aforementioned output is just a short snippet of the number of hits you’ll find when looking for login.keychain in memory. Since this file gets accessed frequently by the keychain you can expect to run into it a lot. Look closely and you’ll see the command “a~upload /Users/Library/Keychains/login.keychain”. We can’t be entirely sure what this does, but earlier we did discover that the attacker’s malware has a built-in upload function. This is likely a sign that the attacker collected the keychain using it.

Let’s see if any other password attacks were successful on this system. Here is a Yara rule made up of a few strings you might find in memory from Dave Grohl, KeychainDump, and Metasploit.


Let’s use this rule to scan memory.


This scan first returns a warning that the regex string we used to find a 48 character alphanumeric masterkey string is slowing down the Yara scan. It ends up paying off as we receive a positive hit on the keychaindump rule. Let’s look in memory to see what this hit is. We should be able to grep for any string that’s in the keychaindump rule.


Let’s expand this using grep -n.


We can see in memory that KeychainDump did find a wrapping key, but based on the output we’re seeing, it does not appear to have found any plain text passwords. This could be because none were cached in the securityd process at the time.

This analysis still leaves us wondering whether or not the attacker was able to recover plain text passwords, but we can see he was actively trying.

There is another question that we haven’t answered yet. How was the attacker able to gain root privileges in the first place? In the malware tree we have created over the past few chapters we’ve seen a bash instance running as root that’s communicating with localhost on port 1583. Take a quick look at the ps aux output again.


It’s interesting that a backdoor would run a bash session using the sudo command. This would commonly imply that the attacker knows the user’s password. Let’s take a look for the “bash -i >& /dev/tcp/127.0.0.1/1583 0>&1” string in memory.


And once again, we will search the surrounding lines using egrep -n.


Here we see a snippet of python code that performs a sudo piggybacking attempt. This code first runs “sudo -K” to invalidate the user’s cached sudo credentials (removing the user’s timestamp directory under /var/db/sudo) and then monitors /var/db/sudo for updates. When the folder is updated, the python code will use sudo to execute a remote bash session as root on the localhost over port 1583. It looks like this is how our attacker obtained root privileges (Fig. 8.6).


Figure 8.6.


URL: https://www.sciencedirect.com/science/article/pii/B978012804456800008X

Live traffic analytics using “Security Onion”

Chris Chapman, in Network Performance and Security, 2016

Updating Security Onion Appliance

Security Onion has made updating very easy. In the terminal window, type in the following command (Fig. 9.10):


Figure 9.10. Update Security Onion Using “soup.”
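On a typical Security Onion installation, the updater referenced above is invoked from the terminal as:

sudo soup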


When asked to continue, press “Enter.”

“Soup” will update the Ubuntu core and all security packages and helper applications in the appliance. It is recommended that after each update you reboot the appliance using the terminal command “sudo reboot”; note that the script will prompt you to reboot.

Because of the rapidly changing nature of attacks, patches, and definition updates, you should set aside a maintenance window at least once per week, or even once per day, in which to run the update scripts. Out-of-date security tools or definitions can have a substantial negative impact on your security posture. As a best practice, start with the server and then work out to the sensors.


URL: https://www.sciencedirect.com/science/article/pii/B9780128035849000093

Linux SSH

In Next Generation SSH2 Implementation, 2009

Solutions Fast Track

Installing OpenSSH Server


Installing OpenSSH on most distributions is simple when using the distribution's package manager.


Ubuntu uses apt as a package manager. To install OpenSSH, just type “sudo apt-get install ssh”.

Controlling Your SSH Server


Controlling the SSH daemon is done by using the sudo command in conjunction with the path to the init script in /etc/init.d/.


The options for the service are start, stop, reload, force-reload, restart, and try-restart.
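For example, on a Debian or Ubuntu system where the init script is installed as /etc/init.d/ssh, restarting or reloading the daemon might look like this (the script name can vary by distribution):

sudo /etc/init.d/ssh restart
sudo /etc/init.d/ssh reload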

Configuring SSH to Ease Your Paranoia


Configuration of the SSH service is controlled by the file sshd_config.


SSH Protocol 1 is inherently insecure and its use should be prohibited by the config file.


Root user access over SSH should be restricted in systems in which the root user has login privileges.


Binding SSH to a non-standard port can make it more difficult for people of malicious intent to connect.


Using the hosts.allow and hosts.deny files can give you granular control of the networks and hosts from which your server is accessed.


Binding SSH to a specific address or interface can help reduce the attack surface of a server.


There are many options in the sshd_config file and each administrator will have to find a balance of security, usability, and visibility.
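A minimal sketch of sshd_config directives covering the points above (the values are illustrative, not universal recommendations):

Protocol 2                    # refuse the insecure SSH protocol 1
PermitRootLogin no            # disallow direct root logins over SSH
Port 2222                     # bind to a non-standard port
ListenAddress 192.168.1.10    # listen only on a specific address

A matching TCP wrappers policy could place “sshd: ALL” in /etc/hosts.deny and “sshd: 192.168.1.” in /etc/hosts.allow to restrict which networks may connect.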

Using SSH


SSH was created as a replacement for Unix tools that did not have strong authentication and encryption.


SSH can be used to log in to a remote system, transfer files, and run remote commands.


SSH can be used in scripts to run remote commands on multiple systems.


There are many tools for Windows that allow an administrator to manage a Linux server via SSH.
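A few representative invocations of the uses listed above (the hostnames and the admin username are placeholders):

ssh admin@server1                        # interactive login
scp backup.tar.gz admin@server1:/tmp/    # copy a file to the remote host
ssh admin@server1 'uptime'               # run a single remote command
for h in server1 server2; do ssh admin@$h 'uname -r'; done    # script a command across systems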

Additional Avenues of Approach


There are many other features of SSH, such as X11 forwarding, using personal keys for authentication, and installing from source code.


OpenSSH is just one package among many. There are several options for both SSH servers and clients.
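As a sketch of the key-based authentication and X11 forwarding mentioned above (hostname and username again placeholders):

ssh-keygen -t rsa            # generate a personal key pair
ssh-copy-id admin@server1    # install the public key on the remote host
ssh -X admin@server1         # log in with X11 forwarding enabled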


URL: https://www.sciencedirect.com/science/article/pii/B978159749283600009X

Configuring Kali Linux

James Broad, Andrew Bindner, in Hacking with Kali, 2014

Configure and Access External Media

Accessing external media like hard drives or thumb drives is much easier in Kali Linux than in earlier versions of Backtrack. Generally media connected to the system using a universal serial bus (USB) connector will be detected and made available by the operating system. However if this does not happen automatically, manually mounting the drive may be necessary.

Manually Mounting a Drive

The first thing that must be done when manually mounting a drive in Kali Linux is to connect the physical drive to the computer. Next, open a command prompt and create a mount point. Creating the mount point requires elevated permissions for the account being used; this can be done with the sudo command if the root account is not being used. The following command will create a mount point called newdrive in the media directory.

mkdir /media/newdrive

Determine the drive and partition you are connecting by using the fdisk command, which lists details of the attached drives. The first hard drive will normally be hda, and the first partition on this drive will be hda1. This sequence continues with additional drives connected to the computer, with the second being hdb and the third being hdc. Most of the time, the primary internal drive will be labeled hda, so the first external drive will be labeled hdb. To mount the first partition of hdb to the newdrive directory created in the last step, use the following command.

mount /dev/hdb1 /media/newdrive

Once this is complete, the contents of the drive will be available by navigating to the newdrive directory.

cd /media/newdrive
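If the device name is not obvious, the attached drives and their partitions can be listed beforehand with fdisk (run as root, or prefix with sudo):

fdisk -l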


URL: https://www.sciencedirect.com/science/article/pii/B9780124077492000045

Incident Response Basics

Jaron Bradley, in OS X Incident Response, 2016

Root versus nonroot

When running your IR collection scripts, you should run them as the root user. Since this book focuses on IR and not corporate forensics, it’s assumed that the user will work with you to execute collection scripts. When run as a standard user you will be denied access to some of the most significant artifacts, such as some opened ports, installed drivers, scheduled tasks, and dumping of memory. For this reason, the IR collection script should be executed either while logged in as root or with the sudo command in front of it. To log in as the root user you must first find a user that has sudo permissions. This is usually the owner of the machine you’ll be responding to. After you’ve found a user capable of sudo, you can authenticate with that user’s assistance.

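For example, with that user at the keyboard, a root shell can typically be obtained with:

sudo -i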

or simply run your script with the sudo prefix

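For example, assuming the collection script is named collect.sh (a hypothetical name):

sudo ./collect.sh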

All of this is to say that your IR script should check to ensure that it is being run as root. Every user on the system has a unique user ID (UID). When you create a new user, the UID is assigned automatically unless manually specified. However, the root UID will always be 0. We can check if the user is running as root by checking the current UID. Here is one of many different ways to do so using bash.

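A minimal sketch of such a check (not the book’s original listing):

if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root." >&2
    exit 1
fi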

It is good practice to clear the sudo cache with “sudo -k” first thing after starting the script as root. This is because anyone else logged on to the system under your UID will otherwise be able to run commands as root without a password for the next 5 min after you have successfully authenticated. This process is covered in detail in the privilege escalation chapter.


URL: https://www.sciencedirect.com/science/article/pii/B9780128044568000029

Powering The Deck

Philip Polstra, in Hacking and Penetration Testing with Low Power Devices, 2015

Looking for Vulnerabilities

Now that we have identified what hosts and services are out there, we can take the next logical step and determine if any of these services are vulnerable to attack. There are a number of general-purpose vulnerability and specialized vulnerability scanners available. We will start with OpenVAS for this penetration test.

The OpenVAS server process must be started if it is not already running. It is recommended to not start this service by default as it can consume a lot of resources. The server is easily started via the command sudo service openvas-server start. This command might take a while. If you are connected to the Internet, OpenVAS will attempt to update itself when started.

If you have not already set up an OpenVAS user, this is easily accomplished by running openvas-adduser and responding to the prompts. The OpenVAS graphical client is started using openvas-client &. The OpenVAS client is shown in Figure 5.13.


Figure 5.13. OpenVAS client.

Once your OpenVAS client is connected, you can create a new scan by selecting Scan Assistant from the File menu. This will lead you through selecting targets, etc., for your scan. Entering only the targets of interest will greatly speed up the scan. The OpenVAS scan assistant is shown in Figure 5.14.


Figure 5.14. OpenVAS scan assistant.

It may take a long time to run a scan against multiple targets. OpenVAS first performs a port scan on each target to find services and then checks for known vulnerabilities. Once the scan is complete, a report will be generated. Figure 5.15 displays the report screen for the PFE network scan. Reports can be exported to multiple formats including text, HTML, and PDF. The scan uncovered 11, 4, and 68 high-, medium-, and low-priority security problems, respectively.


Figure 5.15. OpenVAS scan report for PFE network.


URL: https://www.sciencedirect.com/science/article/pii/B9780128007518000054

The Practice of Applied Network Security Monitoring

Chris Sanders, in Applied Network Security Monitoring, 2014

Testing Security Onion

The fastest way to ensure that NSM services on Security Onion are running is to force Snort to generate an alert from one of its rules. Prior to doing this, I like to update the rule set used by Snort. You can do this by issuing the command sudo rule-update. This will use the PulledPork utility to download the latest set of rules from Emerging Threats, generate a new sid-map (used to map rule names to their unique identifiers), and restart Snort so that the new rules are applied. The partial output of this command is shown in Figure 1.4.


Figure 1.4. Output of the Rule Update

To test the functionality of the NSM services, launch Snorby by selecting the Snorby icon on the desktop. You will be prompted to login with the e-mail address and password you provided during the setup process. Next, click the “Events” tab at the top of the screen. At this point, it’s likely this window will be empty.

In order to generate a Snort alert, open another tab within the browser window and browse to http://www.testmyids.com.
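Alternatively, the same alert can be triggered from the command line instead of a browser, assuming curl is installed:

curl http://www.testmyids.com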

Now, if you switch back over to the tab with Snorby opened and refresh the Events page, you should see an alert listed with the event signature “GPL ATTACK_RESPONSE id check returned root” (Figure 1.5). If you see this alert, then congratulations! You’ve successfully set up your first NSM environment with Security Onion! Feel free to examine the alert by clicking on it and viewing the output in Snorby. We will return to examine Snorby more closely in later chapters.


Figure 1.5. The Test Snort Alert Shown in Snorby

This alert should appear pretty quickly, but if you don’t see it after a few minutes, then something isn’t working correctly. You should reference the Security Onion website for troubleshooting steps, and if you are still running into trouble you should try the Security Onion mailing list or their IRC channel #securityonion on Freenode.

These processes are up to date as of Security Onion 12.04, which was the newest version available during the writing of this book. If you find that this process has changed since the book’s writing, then you should reference the SO wiki for up to date procedures: https://code.google.com/p/security-onion/w/list. We will come back to Security Onion many times throughout the course of this book, but if you’d like to learn more about it in the meantime, the SO wiki is the best resource.


URL: https://www.sciencedirect.com/science/article/pii/B9780124172081000015

Packet String Data

Chris Sanders, Jason Smith, in Applied Network Security Monitoring, 2014

Viewing PSTR Data

As with all NSM data, a proper PSTR data solution requires a synergy between the collection and viewing mechanisms. A more customized collection mechanism can require more unique parsing methods. In this section we will examine potential solutions that can be used to parse, view, and interact with PSTR data using several of the data formatting examples we’ve already created.

Logstash

Logstash is a popular log parsing engine that allows for both multi-line and single line logs of various types, including common formats like syslog and JSON formatted logs, as well as the ability to parse custom logs. As a free and open-source tool, it is an incredibly powerful log collector that is relatively easy to set up in large environments. As an example, we will configure Logstash to parse logs that are being collected with URLsnarf. As of version 1.2.1, Logstash includes the Kibana interface for viewing logs, so we’ll also discuss some of its features that can be used for querying the data you need, without getting the data you don’t.

Logstash isn’t included in Security Onion, so if you want to follow along you will need to download it from the project website at www.logstash.net. Logstash is contained entirely in one java package, so you’ll need the Java Runtime Environment (JRE) installed (http://openjdk.java.net/install/, or simply sudo apt-get install default-jre). At this point, you can simply execute the program.

In order to parse any type of data, Logstash requires a configuration file that defines how it will receive that data. In a real world scenario, you will probably have a steady stream of data rolling in from a logging source, so in this example, we’ll look at data being written to a specific location. In this example, we’ll call the configuration file urlsnarf-parse.conf. This is a very simple configuration:

input {
  file {
    type => "urlsnarf"
    path => "/home/idsusr/urlsnarf.log"
  }
}

output {
  elasticsearch { embedded => true }
}

This configuration tells Logstash to listen to data of any kind being written to /home/idsusr/urlsnarf.log and to consider any log written to that file to be a “urlsnarf” type of log, which is the log type we are defining. The output section of this configuration file starts an Elasticsearch instance inside of Logstash to allow for indexing and searching of the received data.

Once we have a configuration file, we can start up Logstash to initiate the log listener for when we start generating data. To begin Logstash with the Kibana web front end enabled, issue this command:

java -jar logstash-1.2.1-flatjar.jar agent -f urlsnarf-parse.conf -- web

The output of this command is shown in Figure 6.11.


Figure 6.11. Executing Logstash

This command will initiate the agent, specifying urlsnarf-parse.conf with the –f option. Ending the command with “ -- web “ will ensure that Kibana is started along with the logging agent. The initial startup can take a minute, and since the Logstash output isn’t too verbose, you can verify that Logstash is running by invoking netstat on the system.

sudo netstat -antp | grep java

If everything is running properly, you should see several ports initiated by the java service opened up. This is shown in Figure 6.12.


Figure 6.12. These Open Ports Indicate Logstash is Running Properly

Once these are running, go ahead and confirm that the Kibana front end is functioning by visiting http://127.0.0.1:9292 in your web browser, replacing 127.0.0.1 with the IP address of the system you’ve installed Logstash on. This will take you directly to the main Kibana dashboard.

Caution

If you’ve installed Logstash on a Security Onion system and are attempting to access the Kibana web interface from another system (such as your Virtual Machine host system), you will not be able to by default. This is because of the firewall enabled on the system. You can add an exception to the firewall with this command: sudo ufw allow 9292/tcp

Now that Logstash is listening and the Kibana front-end is functional, you can send data to the file specified in urlsnarf-parse.conf. To create data to parse, you can use your existing installation of the Dsniff tool set and start URLsnarf, sending its output data to a file.

sudo urlsnarf > /home/idsusr/urlsnarf.log

After URLsnarf is initialized, open a web browser (or use curl from the command line) and visit a few sites to generate some data. Once you’ve finished, use Ctrl + C to end the URLsnarf process. After stopping the data collection, go back to the Kibana front end and confirm that logs are arriving in the browser. If they are, you should see some data displayed on the screen, similar to Figure 6.13. If they are not, try making sure you’ve selected the correct time span towards the top of the dashboard.


Figure 6.13. Viewing Log Data in Kibana

This figure represents “raw” log files that are being ingested, which are for the most part unparsed. So far, if you examine a log, only the timestamp in which it arrived and the hostname of the current device are present. This is because you haven’t specified a filter in the Logstash configuration so that it knows how to parse the individual fields within each log entry. These filters make up the meat of the configuration and define how logs are indexed.

With that said, let’s extend the flexibility of Logstash by defining custom filters to generate stateful information so that Kibana can really stretch its legs. Logstash uses GROK to combine text patterns and regular expressions to match log text in the order that you wish. GROK is a powerful language used by Logstash to make parsing easier than it would normally be when using regular expressions. We will address getting a stateful understanding of the URLsnarf log format shortly, but let’s start with a simpler example in order to understand the syntax. In this example we’ll create a filter that matches text fields in a log that we generated with Justniffer in Figure 6.14, but this time with the addition of a “sensor name” at the end.


Figure 6.14. Custom Justniffer Data with a Sensor Name to be Parsed

To show how Logstash handles basic matches as opposed to prebuilt patterns, we’ll use a “match” filter in the configuration. The basic configuration containing match filters should look like this:

input {
  file {
    type => "Justniffer-Logs"
    path => "/home/idsusr/justniffer.log"
  }
}

filter {
  grok {
    type => "Justniffer-Logs"
    match => [ "message", "insertfilterhere" ]
  }
}

output {
  elasticsearch { embedded => true }
}

We’ll use the existing built-in GROK patterns to generate the data we need for the configuration, which we’ll call justniffer-parse.conf. These patterns can be found at https://github.com/logstash/logstash/blob/master/patterns/grok-patterns. But before we start examining which patterns we want to tie together, the first thing to do is look at the log format and define what fields we want to identify. This data format breaks down like this:

datestamp timestamp - IP -> IP - domain/path - sensorname SENSOR

Now we need to translate this into GROK, which is where the GROK debugger comes in. The debugger is located at http://grokdebug.herokuapp.com/. Here you simply place the log string you want to match in the top line, and in the pattern line enter the GROK pattern you think will match it. The application will show you which data is matched. The key when developing GROK formatted strings is to start with small patterns and extend them gradually to match the entire log line (Figure 6.15).


Figure 6.15. Using the Grok Debugger
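For instance, against the log format above you might begin by matching only the leading date and time, confirm the match in the debugger, and then append one field at a time until the entire line is covered:

%{DATE:date} %{TIME:time}
%{DATE:date} %{TIME:time} - %{IP:sourceIP} -> %{IP:destIP}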

In order to match the log line we are working with, we will use this pattern:

%{DATE:date} %{TIME:time} - %{IP:sourceIP} -> %{IP:destIP} - %{URIHOST:domain}%{URIPATHPARAM:request} - %{DATA:sensor} SENSOR

You’ll notice we included field labels next to each field, which will identify the fields. Applying the filter to the full configuration file gives us a complete configuration that will parse all incoming Justniffer logs matching the format we specified earlier. This is our resulting configuration file:

input {
  file {
    type => "Justniffer-Logs"
    path => "/home/idsusr/justniffer.log"
  }
}

filter {
  grok {
    type => "Justniffer-Logs"
    match => [ "message", "%{DATE:date} %{TIME:time} - %{IP:sourceIP} -> %{IP:destIP} - %{URIHOST:domain}%{URIPATHPARAM:request} - %{DATA:sensor} SENSOR" ]
  }
}

output {
  elasticsearch { embedded => true }
}

Once you have this configuration, you can go ahead and start the Logstash collector with this command that uses our new configuration file:

java -jar logstash-1.2.1-flatjar.jar agent -f justniffer-parse.conf --web

When Logstash is up and running, you can start gathering data with the following Justniffer command that will generate log data in the format matching the configuration we’ve just created:

sudo justniffer -p "tcp port 80" -u -l "%request.timestamp - %source.ip -> %dest.ip - %request.header.host%request.url - IDS1 SENSOR" >> /home/idsusr/justniffer.log

Once running, you will once again want to browse to a few websites in order to generate logs. As you gather data, check back into Kibana and see if your logs are showing up. If everything has gone correctly, you should have fully parsed custom logs! Along with viewing these fully parsed logs, you can easily search through them in Kibana’s “Query” field at the bottom of the main dashboard page, or you can narrow down the display parameters to define only the fields you wish to see with the “Fields” Event filter to the left of the query field, shown in Figure 6.16.


Figure 6.16. Examining Individual Logs in Kibana

You can also examine metrics for a given field by clicking the field name in the list on the left side of the screen. Figure 6.17 shows field metrics for the Host field, which shows all of the hosts visited in the current logs.


Figure 6.17. Examining Field Metrics in Kibana

This Justniffer log example provides an excellent way to dive into custom parsing of logs with Logstash. However, some log types will be more extensive and difficult to parse. For instance, if we examine URLsnarf logs, we see that they are nearly identical to Apache access logs, with the exception of a character or two. While Logstash would normally be able to handle Apache access logs with ease, these additional characters can break the built-in filters. For this example, we will look at creating our own GROK filter for replacing the existing filter pattern for Apache access logs in order to adequately parse the URLsnarf logs. Our new filter will take into account the difference and relieve the incongruity created by the additional hyphens. Since the filters are so similar to the built-in pattern, we can manipulate this pattern as needed. The latest GROK patterns can be found at the Logstash GIT repository, https://github.com/logstash/logstash/blob/master/patterns/grok-patterns. If you examine the COMBINEDAPACHELOG filter carefully, you’ll see the issue falls with the lack of a simple hyphen, which has been added below.

COMBINEDAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}

The above filter looks complicated, and that’s because it is; breaking it down is an exercise best left for the GROK debugger. Our changes to the original filter account for the extra hyphen and escape the inner quotation marks. We can add this GROK filter into the base configuration we created earlier, resulting in this completed configuration file:

input {
  file {
    type => "urlsnarf"
    path => "/home/idsusr/urlsnarf.log"
  }
}

filter {
  grok {
    type => "urlsnarf"
    match => [ "message", "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" (%{NUMBER:response}|-) (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}" ]
  }
}

output {
  elasticsearch { embedded => true }
}

Without using a GROK filter, these logs would look like Figure 6.18 in Kibana, with most of the data appearing as a single line that doesn’t allow for any advanced analytics based upon fields.


Figure 6.18. The Log Data Without GROK

The new log field description is fully parsed using the filter as seen in Figure 6.19.


Figure 6.19. The New Log Data with GROK

As you can see, the combination of Logstash, Kibana, and GROK makes a powerful trio that is convenient for parsing logs like the ones generated by PSTR data. If you want to learn more about these tools, you should visit the Logstash website at http://logstash.net/.

Raw Text Parsing with BASH Tools

The combination of Logstash and Kibana is an excellent way to parse single line PSTR data, but those tools might not be the best fit in every environment. Depending on how you are sourcing your data, you might find yourself in need of a broader toolset. Even in cases where log search utilities are present, I always recommend that whenever flat text logs are being used, they should be accessible by analysts directly in some form. In the following examples, we’ll take a look at sample PSTR data that includes multi-line request and response headers.

Earlier we generated PSTR data with Justniffer, and for this example, we will start by doing it again:

sudo justniffer -i eth0 -p "tcp port 80" -u -l "------------------------------- %newline%request.timestamp - %source.ip -> %dest.ip %newline%request.header%newline%response.timestamp - %newline%response.header" > pstrtest.log

This should generate data that looks similar to what is shown in Figure 6.7, and store that data in a file named pstrtest.log.

Parsing raw data with BASH tools such as sed, awk, and grep can sometimes carry a mystical aura of fear that is not entirely deserved. After all, parsing this kind of text is one of the most documented and discussed topics in Unix related forums, and I have yet to come across an unresolvable parsing issue. From the example data above, we can gather a significant amount of useful information for analysis. From a tool perspective, we can search and parse this with grep quite easily. For instance we can search for every Host seen in the data set by performing a simple search for the “Host” field, like this:

cat pstrtest.log | grep "Host:"

This will print every line that contains the text "Host:" in any context, even if it is not the context you want. To make sure that it matches only lines beginning with "Host:", try extending grep with the -e option and the caret (^) symbol:

cat pstrtest.log | grep -e "^Host: "

The caret symbol matches the beginning of a line, so every line that has "Host: " at the start of the line will match. Currently, this search is case sensitive; to make it case insensitive, add the -i option. Searching with grep is the easiest and most common use for the tool; however, it can be extended to perform powerful regular expression searching, parsing, and massaging of data. For instance, let’s consider searching for ETags of a very specific format, as shown in Figure 6.20.


Figure 6.20. Using Grep to Search for Etags in PSTR Data

You’ll notice that while most of these entries share similar formats, some will contain additional characters, such as having more than one hyphen (-). The fifth line in Figure 6.20 is an example of this, so let’s search for entries matching it. In essence, we are searching for all lines starting with the text "ETag" followed by a specific value with two hyphens, and we will print only the ETags themselves. The following command will accomplish this goal:

cat pstrtest.log | grep -e "^ETag" | grep -oP "\".*?\-.*?\-.*?\"" | sed 's/"//g'

Despite what appears to be a rather complicated command, it does exactly what we asked. Since this one-liner has multiple elements, let’s break them down individually:

1.

cat pstrtest.log

First, we dump the contents of the pstrtest.log file to the screen (standard output)

2.

grep -e "^ETag"

Next, we pipe the output of the file to grep, where we search for lines containing the text “ETag” at the beginning of a line.

3.

grep -oP "\".*?\-.*?\-.*?\""

The ETags that are found are piped to another grep command that utilizes a regular expression to locate data in the proper format. This format is any number of characters (.*?) between a quote and a hyphen, followed by any number of characters between that hyphen and another, followed by any number of characters and another quote.

4.

sed 's/"//g'

Next, we pipe the output of the last Grep command to Sed to remove any quotation marks from the output.

In this example, we introduced sed into the equation. The sed command is useful for searching and replacing text. In this case, it looks at every line and replaces every instance of a double quote (") with nothing. More simply put, it removes all double quotes. The output of this command is shown in Figure 6.21.


Figure 6.21. The Output of Specific ETag Results from PSTR Data

Another useful way to massage data is to simply sort and count what you have. This might sound like a simple task, and it is, but it is incredibly useful. For example, let’s take a look at the User-Agent string in the HTTP header information that can be contained within PSTR data. We can perform some rudimentary detection by sorting these User-Agent strings from least to most frequently seen. This can often reveal suspicious activity and possible indicators due to user agent strings that are unexpected.

cat pstrtest.log | grep -e "^User-Agent: " | sort | uniq -c | sort -n

In this example we have taken our PSTR data and output only the lines beginning with “User-Agent:”. From here, we pipe this data to the sort command to order the results. This data is then piped to the uniq command, which counts each unique line and provides the total number of times it occurs in a left column. Finally, we pipe that data once more to the sort command and utilize the -n option to sort the data by the count of occurrences. We are left with the data shown in Figure 6.22.


Figure 6.22. Sorted User Agent Data

Analyzing this data immediately reveals that a few unique and potentially suspicious user agents exist in this communication. From that point, you could perform a more thorough investigation surrounding this communication. This is an example of generating some basic statistical data from PSTR data.


URL: https://www.sciencedirect.com/science/article/pii/B9780124172081000064

Installing a base operating system

Philip Polstra, in Hacking and Penetration Testing with Low Power Devices, 2015

Ubuntu

Ubuntu and its derivatives are extremely popular. Ubuntu has occupied one of the top spots at DistroWatch for several years (http://distrowatch.com). Ubuntu, which debuted in 2004, is maintained by Mark Shuttleworth's company, Canonical (http://ubuntu.com). Canonical claims that Ubuntu is the most popular open-source operating system in the world. The word ubuntu describes a South African philosophy, which encourages people to work together as a community. Unlike Debian on which it is based, new versions of Ubuntu are released every six months. Many consider Ubuntu to be one of the easiest Linux distributions for beginners. Ubuntu's attributes are summarized in Table 3.9.

Table 3.9. Ubuntu

Performance: Good—supports ARMv7 with hard float
Package manager: Aptitude/dpkg
Desktop application repository support: Very good
Hacking application repository support: Very good
Community support: Excellent
Configuration: Standard tools
Comments: According to Canonical, Ubuntu is the world's most popular Linux distribution. Thanks to a few individuals it is well supported on the Beagles

Because it is so popular, Ubuntu enjoys excellent repository support. The Ubuntu package manager, apt (advanced packaging tool), is extremely easy to use. Installing a new package is as easy as running sudo apt-get install <package name> from a shell. Updating all the packages on a system is a simple matter of updating the local repository information and then installing any available updates using the command sudo apt-get update && sudo apt-get upgrade. If you are unsure of an exact package name or think a utility might be contained within another package, you can find the correct package name by executing the command apt-cache search <package or utility name>. Graphical and text-based frontends are also provided to make package management even easier.

While there are numerous windowing systems available for Linux systems, the two primary window managers in widespread use have been Gnome and KDE for several years. Both systems have their dedicated followers. Canonical has developed their own windowing system known as Unity. Not surprisingly, some of the KDE and Gnome zealots don't like Unity. Kubuntu is available for users who prefer KDE and still want to run Ubuntu (http://kubuntu.org). This book is being written with LibreOffice and other open-source tools running on a Kubuntu system. Ubuntu Gnome is available for those that prefer Gnome (http://ubuntugnome.org).

Unity, KDE, and Gnome are all a bit large to run on our Beagles with their limited RAM. One of the lightweight windowing systems is typically used on the Beagles and low-powered desktops. When a lightweight desktop is used for a desktop system, the distribution is renamed. For example, Xubuntu is a version of Ubuntu with the Xfce desktop (http://xubuntu.org). When running on an ARM-based system, we normally just say we are running Ubuntu, even though we are not running the Unity desktop.

There are a number of options when running Ubuntu on our Beagles. We can choose a major version, variant within that version, and a particular kernel. Due to some recent changes in Ubuntu and the Linux kernel, these choices are not as trivial as they first sound. Newer devices, such as the BeagleBone Black, only support later Ubuntu and kernel versions, which are somewhat incompatible with previous versions. This will be discussed in-depth following our discussion on what makes a good penetration testing Linux distribution.


URL: https://www.sciencedirect.com/science/article/pii/B9780128007518000030

Infrastructure as a Service

Dinkar Sitaram, Geetha Manjunath, in Moving To The Cloud, 2012

Overview of Amazon EC2

Amazon EC2 allows enterprises to define a virtual server, with virtual storage and virtual networking. The computational needs of an enterprise can vary greatly: some applications may be compute-intensive while others may stress storage; certain enterprise applications may need particular software environments, and others may need computational clusters to run efficiently. Networking requirements can also vary greatly. This diversity in the compute hardware, combined with automatic maintenance and the ability to handle scale, makes EC2 a unique platform.

Accessing EC2 Using AWS Console

As with S3, EC2 can be accessed via the Amazon Web Services console at http://aws.amazon.com/console. Figure 2.7 shows the EC2 Console Dashboard, which can be used to create an instance (a compute resource), check the status of the user’s instances, and even terminate an instance. Clicking on the “Launch Instance” button takes the user to the screen shown in Figure 2.8, where a set of supported operating system images (called Amazon Machine Images, AMIs) are shown to choose from. More on the types of AMI and how one should choose the right one is described in later sections of this chapter. Once the image is chosen, the EC2 instance wizard pops up (Figure 2.9) to help the user set further options for the instance, such as the specific OS kernel version to use, whether to enable monitoring (using the CloudWatch tool described in Chapter 8), and so on. Next, the user has to create at least one key pair, which is needed to securely connect to the instance. Follow the instructions to create a key pair and save the file (say, my_keypair.pem) in a safe place. The user can reuse an already created key pair if the user has many instances (it is analogous to using the same username and password to access many machines). Next, the security groups for the instance can be set to ensure the required network ports are open or blocked for the instance. For example, choosing the “web server” configuration will enable port 80 (the default HTTP port). More advanced firewall rules can be set as well. The final screen before launching the instance is shown in Figure 2.10. Launching the instance gives a public DNS name that the user can use to log in remotely, as if the cloud server were on the same network as the client machine.


Figure 2.7. AWS EC2 console.


Figure 2.8. Creating an EC2 instance using the AWS console.


Figure 2.9. The EC2 instance wizard.


Figure 2.10. Parameters that can be enabled for a simple EC2 instance.

For example, to start using the machine from a Linux client, the user gives the following command from the directory where the key-pair file was saved. After a few confirmation screens, the user is logged into the machine to use any Linux command. For root access the user needs to use the sudo command.

ssh -i my_keypair.pem ec2-67-202-62-112.compute-1.amazonaws.com

For Windows, the user needs to open the my_keypair.pem file and use the “Get Windows Password” button on the AWS Instance page. The console returns the administrator password that can be used to connect to the instance using a Remote Desktop application (usually available at Start-> All Programs -> Accessories -> Remote Desktop Connection).

A description of how to use the AWS EC2 Console to request the computational, storage and networking resources needed to set up and launch a web server is described in the Simple EC2 example: Setting up a Web Server section of this chapter.

Accessing EC2 Using Command Line Tools

Amazon also provides a command line interface to EC2 that uses the EC2 API to implement specialized operations that cannot be performed with the AWS console. The following briefly describes how to install and set up the command line utilities. More details are found in Amazon Elastic Compute Cloud User Guide [5]. The details of the command line tools are found in Amazon Elastic Compute Cloud Command Line Reference [6].

Note

Installing EC2 command line tools

Download tools

Set environment variables (e.g., location of JRE)

Set security environment (e.g., get certificate)

Set region

Download tools: The EC2 command line utilities can be downloaded from Amazon EC2 API Tools [7] as a Zip file. They are written in Java, and hence will run on Linux, Unix, and Windows if the appropriate JRE is available. In order to use them simply unpack the file, and then set appropriate environment variables, depending upon the operating system being used. These environment variables can also be set as parameters to the command.

Set environment variables: The first command sets the environment variable that specifies the directory in which the Java runtime resides. PATHNAME should be the full pathname of the directory where the java.exe file can be found. The second command specifies the directory where the EC2 tools reside; TOOLS_PATHNAME should be set to the full pathname of the directory named ec2-api-tools-A.B-nnn into which the tools were unzipped (A, B and nnn are some digits that differ based on the version used). The third command sets the executable path to include the directory where the EC2 command utilities are present.

For Linux:

$export JAVA_HOME=PATHNAME

$export EC2_HOME=TOOLS_PATHNAME

$export PATH=$PATH:$EC2_HOME/bin

For Windows:

C:\>SET JAVA_HOME=PATHNAME

C:\>SET EC2_HOME=TOOLS_PATHNAME

C:\>SET PATH=%PATH%;%EC2_HOME%\bin

Set up security environment: The next step is to set up the environment so that the EC2 command line utilities can authenticate to AWS during each interaction. To do this, it is necessary to download an X.509 certificate and private key that authenticates HTTP requests to Amazon. The X.509 certificate can be generated by clicking on the “Account” link shown in Figure 2.7, clicking on the “Security Credentials” link that is displayed, and following the given instructions to create a new certificate. The certificate files should be downloaded to a .ec2 directory in the home directory on Linux/Unix, and C:\ec2 on Windows, without changing their names. The following commands are to be executed to set up the environment; both Linux and Windows commands are given. Here, f1.pem is the certificate file downloaded from EC2.

$export EC2_CERT=~/.ec2/f1.pem

or

C:\> set EC2_CERT=C:\ec2\f1.pem

Set region: It is necessary to next set the region that the EC2 command tools interact with – i.e., the location in which the EC2 virtual machines would be created. AWS regions are described in a subsequent section titled S3 Administration. In brief, each region represents an AWS data center, and AWS pricing varies by region. The command ec2-describe-regions can be issued at this point to test the installation of the EC2 command tools and list the available regions.

The default region used is the US-East region “us-east-1” with service endpoint URL http://ec2.us-east-1.amazonaws.com, but the region can be set to any specific endpoint using the following command, where ENDPOINT_URL is formed from the region name as illustrated for “us-east-1”.

$export EC2_URL=https://<ENDPOINT_URL>

Or

C:\> set EC2_URL=https://<ENDPOINT_URL>
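For example, to point the tools at the EU (Ireland) region, whose endpoint is formed from the region name in the same way as us-east-1:

$export EC2_URL=https://ec2.eu-west-1.amazonaws.com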

A later section explains how developers can use the EC2 and S3 APIs to set up a web application that implements a simple publishing portal such as the Pustak Portal (the running example used in this book). Before that, one needs to understand what a computational resource is and the parameters that can be configured for each such resource, described in the next section.

EC2 Computational Resources

This section first gives a brief overview of the computational resources available on EC2, followed by the storage and network resources; more details are available in the EC2 Introduction [8].

Computing resources: The computing resources available on EC2, referred to as EC2 instances, consist of combinations of computing power, together with other resources such as memory. Amazon measures the computing power of an EC2 instance in terms of EC2 Compute Units [9]. An EC2 Compute Unit (CU) is a standard measure of computing power in the same way that bytes are a standard measure of storage. One EC2 CU provides the same amount of computing power as a 1.0–1.2 GHz Opteron or Xeon processor in 2007. Thus, if a developer requests a computing resource of 1 EC2 CU, and the resource is allocated on a 2.4 GHz processor, they may get 50% of the CPU. This allows developers to request standard amounts of CPU power regardless of the physical hardware.

The EC2 instances that Amazon recommends for most applications belong to the Standard Instance family [8]. The characteristics of this family are shown in Table 2.1, EC2 Standard Instance Types. A developer can request a computing resource of one of the instance types shown in the table (e.g., a Small computing instance, which would have the characteristics shown). Figure 2.8 showed how one can do this using the AWS console. Selection of local storage is discussed later in the section titled EC2 Storage Resources.

Table 2.1. EC2 Standard Instance Types

Instance Type   Compute Capacity             Memory   Local Storage   Platform
Small           1 virtual core of 1 CU       1.7 GB   160 GB          32-bit
Large           2 virtual cores, 2 CU each   7.5 GB   850 GB          64-bit
Extra Large     4 virtual cores, 2 CU each   15 GB    1690 GB         64-bit
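
As an alternative to the console flow of Figure 2.8, an instance can also be launched with the command line tools. The following is a minimal sketch, not taken from the source: the AMI ID, key pair, and security group names are placeholders, and m1.small is assumed to be the API name of the Small instance type.

$ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair -g my-security-group

The command prints a reservation and instance description; ec2-describe-instances can then be used to check when the instance reaches the running state.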

Other instance families available in Amazon at the time of writing this book include the High-Memory Instance family, suitable for databases and other memory-hungry applications; the High-CPU Instance family, for compute-intensive applications; the Cluster-Compute Instance family, for High-Performance Computing (HiPC) applications; and the Cluster GPU Instance family, which includes Graphics Processing Units (GPUs) for HiPC applications that need GPUs [8].

Software: Amazon makes available certain standard combinations of operating system and application software in the form of Amazon Machine Images (AMIs). The required AMI has to be specified when requesting the EC2 instance, as seen earlier. The AMI running on an EC2 instance is also called the root AMI.

Operating systems available in AMIs include various flavors of Linux, such as Red Hat Enterprise Linux and SuSE, as well as Windows Server and Solaris. Software available includes databases such as IBM DB2, Oracle, and Microsoft SQL Server. A wide variety of other application software and middleware, such as Hadoop, Apache, and Ruby on Rails, is also available [8].

There are two ways of using additional software not available in standard AMIs. It is possible to request a standard AMI, and then install the additional software needed. This AMI can then be saved as one of the available AMIs in Amazon. The other method is to import a VMware image as an AMI using the ec2-import-instance and ec2-import-disk-image commands. For more details of how to do this, the reader is referred to [9].
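
For the first method, an EBS-backed instance that has been customized can plausibly be saved as a new AMI with the ec2-create-image command; the instance ID and image name below are placeholders, and this sketch is an assumption rather than something shown in the source.

$ec2-create-image i-xxxxxxxx -n "my-custom-ami"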

Regions and Availability Zones: EC2 offers regions, which are the same as the S3 regions described in the section S3 Administration. Within a region, there are multiple availability zones, where each availability zone corresponds to a virtual data center that is isolated (for failure purposes) from other availability zones. Thus, an enterprise that wishes to have its EC2 computing instances in Europe could select the “Europe” region when creating EC2 instances. By creating two instances in different availability zones, the enterprise could have a highly available configuration that is tolerant to failures in any one availability zone.
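
To see the availability zones of the currently selected region and to place an instance in a specific zone, something like the following could be used. The zone name and AMI ID are illustrative, and -z is assumed to be the availability zone option of ec2-run-instances.

$ec2-describe-availability-zones

$ec2-run-instances ami-xxxxxxxx -t m1.small -z eu-west-1a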

Load Balancing and Scaling: EC2 provides the Elastic Load Balancer, which is a service that balances the load across multiple servers. Details of its usage are in the section EC2 Example: Article Sharing in Pustak Portal. The default load balancing policy is to treat all requests as being independent. However, it is also possible to have timer-based and application controlled sessions, whereby successive requests from the same client are routed to the same server based upon time or application direction [10]. The load balancer also scales the number of servers up or down depending upon the load. This can also be used as a failover policy, since failure of a server is detected by the Elastic Load Balancer. Subsequently, if the load on the remaining server is too high, the Elastic Load Balancer could start a new server instance.
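
The Elastic Load Balancer has its own command line tools, separate from the EC2 API tools. The sketch below assumes the elb-create-lb and elb-register-instances-with-lb commands from those tools; the load balancer name, availability zone, and instance IDs are placeholders.

$elb-create-lb my-lb --availability-zones us-east-1a --listener "protocol=http,lb-port=80,instance-port=80"

$elb-register-instances-with-lb my-lb --instances i-xxxxxxxx,i-yyyyyyyy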

Once the compute resources are identified, one needs to set up any storage resources needed; the next section describes these.

Note

EC2 Storage Resources

Amazon S3: Highly available object store

Elastic Block Service: permanent block storage

Instance Storage: transient block storage

EC2 Storage Resources

As stated earlier, computing resources must be used along with associated storage and network resources in order to be useful. S3, the file storage offered by Amazon, has already been described in the Amazon Storage Services section. Using S3 files is similar to accessing an HTTP server (a web file system). However, applications often perform many disk I/Os, and for performance and other reasons one needs control over the storage configuration as well. This section describes how to configure resources that appear as physical disks to the EC2 server, called block storage resources. There are two types of block storage resources: the Elastic Block Service and instance storage, described next.

Elastic Block Service (EBS): In the same way that S3 provides file storage services, EBS provides a block storage service for EC2. It is possible to request an EBS disk volume of a particular size and attach this volume to an EC2 instance (one instance at a time) using the volume ID returned when the volume is created, together with the ID of the target instance. Unlike the local storage assigned during the creation of an EC2 instance, an EBS volume has an existence independent of any EC2 instance, which is critical for persistence of data, as detailed later.
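
A minimal sketch of creating and attaching an EBS volume with the command line tools follows; the size, availability zone, volume ID, instance ID, and device name are placeholders. The volume must be created in the same availability zone as the instance to which it will be attached.

$ec2-create-volume --size 10 --availability-zone us-east-1a

$ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf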

Instance Storage: Every EC2 instance has local storage that can be configured as a part of the compute resource (Figure 2.8) and this is referred to as instance storage. Table 2.2 shows the default partitioning of instance storage associated with each EC2 instance for standard instance types. This instance storage is ephemeral (unlike EBS storage); i.e., it exists only as long as the EC2 instance exists, and cannot be attached to any other EC2 instance. Furthermore, if the EC2 instance is terminated, the instance storage ceases to exist. To overcome this limitation of local storage, developers can use either EBS or S3 for persistent storage and sharing.

Table 2.2. Partitioning of Local Storage in Standard EC2 Instance Types

         Small                          Large                           Extra Large
Linux    /dev/sda1: root file system    /dev/sda1: root file system     /dev/sda1: root file system
         /dev/sda2: /mnt                /dev/sdb: /mnt                  /dev/sdb: /mnt
         /dev/sda3: swap                /dev/sdc, /dev/sdd, /dev/sde    /dev/sdc, /dev/sdd, /dev/sde
Windows  /dev/sda1: C:                  /dev/sda1: C:                   /dev/sda1: C:
         xvdb                           xvdb, xvdc, xvdd, xvde          xvdb, xvdc, xvdd, xvde

The instance AMI, configuration files, and any other persistent files can be stored in S3, and during operation a snapshot of the data can be taken periodically and sent to S3. If data needs to be shared, this can be accomplished via files stored in S3. An EBS volume can also be attached to an instance as desired. A detailed example of how one does this is described later in the context of Pustak Portal.
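
For example, a point-in-time snapshot of an EBS volume can be sent to S3 with ec2-create-snapshot, and its progress checked with ec2-describe-snapshots; the volume ID below is a placeholder.

$ec2-create-snapshot vol-xxxxxxxx

$ec2-describe-snapshots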

Table 2.3 summarizes some of the main differences and similarities between the two types of storage.

Table 2.3. Comparison of Instance Storage and EBS Storage

              Instance Storage                                                        EBS Storage
Creation      Created by default when an EC2 instance is created                      Created independently of EC2 instances
Sharing       Can be attached only to the EC2 instance with which it is created       Can be shared between EC2 instances
Attachment    Attached by default to S3-backed instances; can be attached to EBS-backed instances    Not attached by default to any instance
Persistence   Not persistent; vanishes if the EC2 instance is terminated              Persistent even if the EC2 instance is terminated
S3 snapshot   Can be snapshotted to S3                                                Can be snapshotted to S3

S3-backed instances vs. EBS-backed instances: EC2 compute and storage resources behave slightly differently depending upon whether the root AMI for the EC2 instance is stored in Amazon S3 or in Amazon Elastic Block Service (EBS). These instances are referred to as S3-backed instances and EBS-backed instances, respectively. In an S3-backed instance, the root AMI is stored in S3, which is file storage. Therefore, it must be copied to the root device in the EC2 instance before the EC2 instance can be booted. However, since instance storage is not persistent, any modifications made to the AMI of an S3-backed instance (such as patching the OS or installing additional software) will not be persistent beyond the lifetime of the instance. Furthermore, while instance storage is attached by default to an S3-backed instance (as shown in Table 2.2), instance storage is not attached by default to EBS-backed instances.

EC2 Networking Resources

In addition to compute and storage resources, network resources are also needed by applications. For networking between EC2 instances, EC2 offers both a public address and a private address [5]. It also offers DNS services for managing the DNS names associated with these IP addresses. Access to these IP addresses is controlled by policies. The Virtual Private Cloud can be used to provide secure communication between an Intranet and the EC2 network. One can also create a complete logical subnetwork and expose it to the public (a DMZ) with its own firewall rules. Another interesting feature of EC2 is Elastic IP addresses, which are independent of any instance and can be used to support failover of servers. These advanced features, and how they can be used to set up a network, are described in this section after the key terminology is introduced next.

Note

EC2 Networking

Private and public IP addresses per instance

Elastic IP addresses not associated with any instance

Route 53 DNS that allows simple URLs (e.g., www.mywebsite.com)

Security groups for networking security policies

Instance addresses: Each EC2 instance has two IP addresses associated with it: a public IP address and a private IP address. The private IP address and DNS name can be resolved only within the EC2 cloud. For communication between EC2 instances, the private IP addresses are most efficient, since the messages then pass entirely within the Amazon network. The public IP address and DNS name can be used for communication outside the Amazon cloud.

Elastic IP addresses: These IP addresses are independent of any instance, but are associated with a particular Amazon EC2 account and can be dynamically assigned to any instance (in which case, the public IP address is de-assigned). Therefore, they are useful for implementing failover. Upon failure of one EC2 instance, the Elastic IP address can be dynamically assigned to another EC2 instance. Unlike instance IP addresses, Elastic IP addresses are not automatically allocated; they have to be generated when needed.
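
A minimal sketch of this failover flow with the command line tools follows; the IP address and instance ID are placeholders.

$ec2-allocate-address

$ec2-associate-address 203.0.113.25 -i i-xxxxxxxx

On failure of that instance, running ec2-associate-address again with the ID of the standby instance moves the Elastic IP address to it.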

Route 53: Enterprises may desire to publish a URL of the form http://www.myenterprise.com for EC2 instances. This is not possible by default, since the EC2 instances are inside the amazon.com domain. Route 53 is a DNS server that can be used to associate an Elastic IP address or public IP address with a name of the form www.myenterprise.com.

Security Groups: For networking security, it is common to define network security policies that restrict the ports through which any machine can be accessed, or the IP addresses that can access a server. The same can be achieved for EC2 instances using security groups, briefly mentioned earlier. Each security group is a collection of network security policies. Different security groups should be created for different server types; for example, the web server security group could specify that port 80 may be opened for incoming connections. The default security group when creating an EC2 instance allows the instance to connect to any outside IP address but disallows incoming connections.
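
A sketch of the web server example using the classic EC2 API tools is shown below; the group name is illustrative, and ec2-add-group and ec2-authorize are assumed to be the relevant commands.

$ec2-add-group websrv -d "Web servers"

$ec2-authorize websrv -p 80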

Virtual Private Cloud: Enterprises that desire more control over their networking configuration can use Virtual Private Cloud (VPC). Examples of the advanced networking features offered by VPC include:

i. the ability to allocate both public and private IP addresses to instances from any address range;

ii. the ability to divide the addresses into subnets and control the routing between subnets;

iii. the ability to connect the EC2 network with an Intranet using a VPN tunnel.

Details of VPC are beyond the scope of this book and can be found in Amazon Virtual Private Cloud [11].


What command would reveal the most information about the groups that a user named Bob belongs to?

To display the group(s) a user belongs to, use the id command (for example, id Bob), which prints the user's UID, primary group, and all supplementary groups.

Which of the following commands can be used to list all the groups that the current user is a part of? (Mark all the correct answers.)

To display the group(s) the current user belongs to, use the groups or id command; alternatively, grep the user name in /etc/group.

Which command will display the groups a user account is a member of?

In AFS, issue the pts membership command to display the members of a group or the groups to which a user belongs, where the user or group name or ID specifies the name or AFS UID of each user for which to display group membership, or the name or AFS GID of each group for which to display the members.

What command would reveal the most information about groups that a user belongs to?

Method 2: the id command. Another way to identify the groups a user is in is the id command, which prints user and group information for the specified USER. If no USER is specified, it prints the information for the current user.
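
For completeness, the commands mentioned above can be run as follows for a user named bob (the user name is illustrative):

id bob

groups bob

grep bob /etc/group

id prints the UID, primary group, and all supplementary groups; groups prints only the group names; grepping /etc/group shows the group file entries that mention the user.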