Friday, December 22, 2006

Installing The Sleuthkit on Ubuntu 6.06 (Dapper)

TSK (The Sleuth Kit) is a package of Unix-based computer forensics tools. My interest at the moment is mainly in robust data recovery.

Usually it's a good idea to install software from packages if you're running Ubuntu, which I am. In this case, though, the Ubuntu TSK package is version 2.03, while 2.07 is current as of today, with lots of bug fixes and a few added features. So I installed from source. There are a couple of dependencies that apt would have handled for me, but alas... someday I'll have to learn to create packages so I can save people some trouble.

TSK requires afflib. Afflib requires zlib and libssl.

First, install zlib:
download the zlib source tarball
tar -xzvf zlib-<version>.tar.gz
cd zlib-<version>
./configure
make
sudo make install (sudo is required to install files into system directories; I understand it's bad practice to configure and make as root)

Now libssl:
sudo apt-get install libssl-dev


Now afflib:
This was a pain. Fortunately, it was an unnecessary pain. There's no afflib package available for Ubuntu 6.06 LTS, and compiling it from source doesn't work either. But you don't need to - TSK just needs the code available for its own compile. I found the following advice:

"The problem is that AFFLIB (for the AFF image format) requires zlib and
openssl, both of which do not seem to be included with Ubuntu by
default. You will need to install those packages and libraries. Most
systems come with those libraries, but Ubuntu does not seem to (I went
through the same pain a couple of months ago setting a system up).

Also, someone else had issues compiling Kubuntu with the version of
AFFLIB that was included in tsk 2.04, so you should probably update the
AFFLIB with the latest version:

1. Download version 1.6.26
http://www.afflib.org/downloads/afflib-1.6.26.tar.gz

2. Untar it.

3. Remove the src/afflib directory from TSK.

4. Move the afflib-1.6.26 directory to src/afflib (be sure you name it
afflib and not afflib-1.6.26).

5. Compile TSK as normal. "

So...do that. To continue with afflib:

download the afflib-1.6.26 tarball
download the sleuthkit-2.07 tarball
tar -xzvf afflib-1.6.26.tar.gz
tar -xzvf sleuthkit-2.07.tar.gz
cd sleuthkit-2.07/src
rm -rf afflib
cp -r ../../afflib-1.6.26 ./afflib

Now for TSK:
There's no configure step, just make.
This puts all the finished tools in the package's own bin directory, not anywhere in the system folders. You may want to link them into /usr/local/bin or some other spot. I cp'd sleuthkit-2.07 to /usr/local/sleuthkit-2.07, then ln -s /usr/local/sleuthkit-2.07 /usr/local/sleuthkit.

The symlink is for the convenience of apps like Autopsy (see below) so they can refer to a generic location and not be tripped up by updated versions of sleuthkit.
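Put together, here's roughly what I ran for TSK itself (the -r on cp and the sudo are implied above rather than spelled out; adjust the paths to taste):

cd sleuthkit-2.07
make
cd ..
sudo cp -r sleuthkit-2.07 /usr/local/sleuthkit-2.07
sudo ln -s /usr/local/sleuthkit-2.07 /usr/local/sleuthkit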

Now for Autopsy:
Autopsy is the HTML GUI for TSK. You can do wonderful command-line things with TSK alone, but by all accounts you want this piece to tie it all together.

You need to know where you have sleuthkit installed, because it will ask.

tar -xzvf autopsy-<version>.tar.gz
cd autopsy-<version>
make
answer any questions (this is where it asks where sleuthkit lives)...

follow its directions
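Once make finishes, starting it up is just the following (Autopsy listens on port 9999 by default, but use whatever URL it actually prints, since that includes a session token):

cd autopsy-<version>
./autopsy

Then point a browser at the URL it gives you.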

Thursday, December 21, 2006

Installing VMWare Tools

VMware Tools helps the VM console and the rest of your desktop coexist. It installs better video drivers, for example, and allows cut & paste between the console and the virtual machine. The VMware console alerts you when you don't have the tools installed, so it's angst-inducing even if you don't need them. What isn't clear is that the simple right-click "Install VMware Tools" is only the beginning: all that does is prep the CD-ROM and present a pseudo .iso for the tools installation.

The CD-ROM should be set to auto detect. This can only be changed when the virtual machine is powered off.

Then, for a Windows guest ("guest" is the VMware term for the virtual machine), you need to be logged in with admin rights and either manually launch the setup routine from the virtualized CD-ROM or, if autorun is enabled (which it shouldn't be!), let it handle that for you.

Not a big deal, but I found myself wondering whether anything was happening when, in fact, nothing was happening. I'd like to see the tools installation option on the console say, "You need to log in now on the guest and install it!"
Running a VMware image from a dd file

I spent a little too much time on this project, but I think it was worth it because of a bunch of interesting lessons learned. Basically, I had a Windows 2000 desktop set up that I'd been using for years, with many shortcuts and tools installed, etc. I wanted to run it as a virtual machine rather than start with a fresh install and add all that stuff back in.

Thanks to this blog post on TechRepublic, by Justin Fielding, I had the outline of the steps to set up a VMware virtual machine using a dd disk image. I found that the cylinders/heads/sectors data printed on the back of the hard drive wasn't the same as what I could get via software, so I used losetup to attach the dd file to a loopback device, and fdisk to query that device. Note that you can also mount a partition from the loopback image and pull files straight off it (there's an example after the fdisk output below). Cool!

sudo losetup /dev/loop/0 james-backup/hdimage.dd
sudo fdisk /dev/loop/0

The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/loop/0: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device          Boot    Start     End      Blocks    Id  System
/dev/loop/0p1   *           1     510     4096543+    7  HPFS/NTFS
/dev/loop/0p2             511    4864    34973505     f  W95 Ext'd (LBA)
/dev/loop/0p5             511    4864    34973473+    7  HPFS/NTFS
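
As an aside, since I mentioned pulling files straight off the image: you can mount a single partition from the dd file by giving mount the partition's byte offset. A quick sketch (the 32256 offset is 63 sectors * 512 bytes, assuming the first partition starts at sector 63 as the geometry above implies; the mount point is whatever you like):

sudo mkdir -p /mnt/ddpart1
sudo mount -o ro,loop,offset=32256 -t ntfs james-backup/hdimage.dd /mnt/ddpart1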

I plugged this data into hdimage.vmdk (the VMware disk configuration file) as follows:

# Extent description
RW 78156225 FLAT "hdimage.dd" 0

(78156225 = 255 heads * 63 sectors * 4865 cylinders)

ddb.geometry.sectors = "63"
ddb.geometry.heads = "255"
ddb.geometry.cylinders = "4865"

The fdisk listing shows the two NTFS partitions I expected - one primary, and one logical inside the extended partition.
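
For context, a complete descriptor file for a flat image like this is only a dozen or so lines, roughly as follows (the header and adapterType lines are typical defaults, not copied from my actual file, so treat those as assumptions):

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="monolithicFlat"

# Extent description
RW 78156225 FLAT "hdimage.dd" 0

# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "4865"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "ide"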

I ran into a wrinkle where the virtual machine would start to boot but then fail. The first series of problems went away when I mounted the Samba share correctly - with the uid of the user needing to access it. I was a little confused by the file permissions on the remote server vs. the file permissions on the local mount point. Clarity descended when I specified the uid as in the example below. Anything in < > is specific to your situation.

sudo smbmount //<server>/<share> /<mountpoint> -o lfs,credentials=/home/<user>/.smbpasswd,uid=<user>

Breakdown:
//<server>/<share> is the server and file share. Typically a Samba server will be set up to share out a user's home directory if you connect with that user's credentials.

/<mountpoint> specifies where on the local file system you want the server's file share to show up. You could create a directory in /mnt, for example, or /home/<user>/samba. Then you cd /home/<user>/samba, and the remote files are there for you.

-o - specifies that mount command options follow.

lfs - large file support. See below.

credentials - This lets you specify a file containing the username and password used to connect to the Samba share. It's slightly less risky than sticking them directly in /etc/fstab.

uid - the local user id you want to own and control the mounted Samba share.
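
For reference, here's roughly what the supporting pieces look like; this is a sketch based on the smbfs conventions of the time, not copied from my actual setup:

# /home/<user>/.smbpasswd (keep it chmod 600)
username = <user>
password = <password>

# equivalent /etc/fstab line, if you'd rather mount it at boot
//<server>/<share>  /<mountpoint>  smbfs  lfs,credentials=/home/<user>/.smbpasswd,uid=<user>  0  0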

Once I got that all lined up, VMware could open and use all the file locks and temp files it needs. But I still had a problem: the virtual machine would start, but bomb while it booted, leaving me with a core file and some stale lock files. I couldn't tell why, but vmware.log showed a "caught signal 25" entry each time. According to the man page for signal, signal 25 is

SIGXFSZ   25,25,31   Core   File size limit exceeded (4.2 BSD)

File size limit? Hmmm. The file is on a server, being accessed through Samba on a client. I checked around: on the local machine, ulimit reports "unlimited", and the same on the server. In fact, copying the large image up there in the first place through scp didn't give me any problems. My friend Jeff found an option for mounting the Samba share, lfs (large file support). That solved the signal 25 problem.
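
For the record, checking the limit and remounting look like this (same smbmount command as above; lfs is the operative part):

ulimit -f
sudo smbmount //<server>/<share> /<mountpoint> -o lfs,credentials=/home/<user>/.smbpasswd,uid=<user>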

Final note: it would be better (faster) to use the iSCSI Enterprise Target to store the VMware image file. Going through a file server is going to be slower than using a SAN. iSCSI gives you block-level access to the remote disk, effectively turning your LAN into a big SCSI cable. I just don't have a server with any unallocated disk space at the moment, but I do have some Samba servers with an oversupply of storage.

Monday, December 11, 2006

Agents - a bad idea

I'm guilty of blog echo here. I'm echoing a post on TaoSecurity that echoes a post on Matasano. Sorry. Sort of.

For a while now I've been aware of the obvious drawback of adding software to a production device: anything you add adds complexity, which undermines reliability and security. Whether the additional software gives back enough in return is the question.

Matasano examines this credo in detail, with empirical evidence supporting the idea that you need to minimize agents. Agents are used for antivirus, desktop inventory and configuration management (especially for security functions like patch management, firewall and host-based IDS).

Read the Matasano post. Among other problems with agents:
1) Vendors are still in early-mid 1990's mode as far as responding to vulnerability reports. That is, they ignore them.

2) Agents are complex and invasive processes, creating a massive attack surface. This is not a theoretical problem; there's an extensive history of actual vulnerabilities in agent-based tools.

3) This inviting attack surface is typically present across an entire enterprise. That is, there's a huge monoculture target waiting. The biodiversity analogy suffers from the same weakness all argument by analogy does, but it's still useful: where there's no genetic diversity, a vulnerable population is universally vulnerable to an infectious agent.

4) Further, most agents report to centralized servers, which themselves present inviting targets. Own that server, own the enterprise. Yikes.

So what's the alternative? Software vendors develop agents to control and collect data on otherwise unmanageable numbers of machines, and the built-in mechanisms to take care of this stuff may not be externally available. I think, for the most part, these mechanisms are present in modern OS's. Essentially, the agent and its problems are already provided, so don't add more.

Also, I think the pull method, where an agent periodically checks in, requests updates, and reports status, is more robust than the push method, where a central server issues directives. In both cases, owning the server is tantamount to control, but there are some beneficial corner cases for the pull method. With pull, you could still plant a malicious update on the server that the client would fetch and act on in good faith, but that's a little trickier than the Stalinist push method. Also, a pull method doesn't create a new listening service on every client that can be attacked without controlling the server.
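
To make that concrete, here's a crude sketch of the pull model using nothing but cron and wget. The hostnames and paths are hypothetical, and this is not cfengine or puppet syntax, just the bare idea: the client fetches its own configuration on a schedule instead of listening for a push.

# /etc/cron.d/pull-config (hypothetical)
# Every 30 minutes, fetch this host's config bundle from the management
# server and apply it. No new listening service on the client; every
# connection is outbound.
*/30 * * * *  root  wget -q -O /var/tmp/host.conf http://config.example.com/configs/$(hostname).conf && sh /usr/local/sbin/apply-config.sh /var/tmp/host.conf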

I believe cfengine and puppet both follow this approach, but I haven't used either.

Of course, the element of authoritarian direct control is part of the appeal of these systems. So push will probably win over pull due to its superficial appeal.