Monday, December 16, 2013

Setting Up Full Disk Encrypted LVM on RAID-1 for Ubuntu

For several years now, I have run ZFS on Linux on my home Ubuntu desktop. My friend, Matt, and I have even thrown together some pretty handy scripts for some simple Linux tasks with ZFS, such as deleting sets of snapshots and simplifying backups. It was always a lot of fun to play with, and I always felt a deep sense of loyalty to ZFS, having proudly worked at Sun Microsystems. But, sadly, the romance is over. I found myself regularly running into stability and performance issues with ZFS on Linux. Combined with a deep desire for full disk encryption (ZFS encryption was never open sourced by Oracle, unlike the rest of ZFS), I finally decided to pull the plug.

So what to replace it with? Well, for my personal machine, what I really wanted was reliability, flexibility, and encryption. The first, I figured, would be covered by a mirrored RAID array, although perhaps it's a bit of a stretch to call 2 disk RAID 1 an "array." The second would be covered by LVM, and for the third I'd use the standard Ubuntu LUKS setup.

I quickly learned the good news: the default Ubuntu 13.10 installer offers a LUKS full disk encryption setup on LVM! So I started digging around for the RAID option so I could get going. Sadly, it was nowhere to be found, and the Internet confirmed it. Everyone suggested using the Ubuntu server install to get a RAID setup. However, I really don't like messing around to find every package to get the exact right desktop setup; if at all possible, I really wanted to use the standard installer.

So I dug, and dug, and dug, and I couldn't find anything with a proper explanation of how to do a standard installation with RAID. The closest I found was this EncryptedFilesystemLVMHowTo guide. So I spent a long time trying to get it all right, and I figured I ought to share the knowledge in case anyone is looking to do the same.

  1. Start by booting the Live CD or USB to the "Try Ubuntu" Desktop. The straight installer is not going to do you any good.
  2. Open a terminal and sudo -i. You'll need this root shell throughout.
  3. In the terminal, run apt-get install mdadm. This is the standard Linux software RAID manager.
  4. Open gparted and create your partitions. I ran into a bit of trouble because my disks are 3TB each. This is a problem because the standard MBR partition table can't handle disks larger than 2TB, so I had to create a GPT partition table.
  5. In my case, because I was using GPT, I needed to create a 1MB partition at the start of both disks and turn on the "bios_grub" flag.
  6. I then made two more partitions on each disk: a 256MB partition which will be the boot partition, and the rest of the disk which will be the RAID array. Thus, in my setup, the final partitioned devices were:
    /dev/sda1 /dev/sda2 /dev/sda3
    /dev/sdb1 /dev/sdb2 /dev/sdb3
    The devices that end in 2, I formatted as ext4. The other devices I left "unformatted," but you can format them however you like as they'll be overwritten shortly anyway.
  7. Next I created the RAID array by running in the shell:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    This should start up a new device, /dev/md0, automatically.
  8. Next, I created the encrypted LUKS device. In this particular case, the Ubuntu GUI works quite well, so I used the "Disks" utility provided by Ubuntu, selected the md0 device, selected "format," and chose the "encrypted + ext4" option. Enter your passphrase of choice, and click "OK."
  9. Now you should have a new encrypted device whose unencrypted version is mapped inside /dev/mapper. It will probably be a long, complicated ID, but let's call it /dev/mapper/luks-dev for this tutorial.
  10. Now to create the LVM partitions. I wanted 3 LVM partitions: 1 for swap, 1 for my root install, and 1 for my home directory. Part of the beauty of LVM, though, is that you can change these up later. I set them up with the following:
    1. pvcreate /dev/mapper/luks-dev
    2. vgcreate ubuntu /dev/mapper/luks-dev
    3. lvcreate -L 20G -n swap ubuntu
    4. lvcreate -L 300G -n root ubuntu
    5. lvcreate -L 2.5T -n home ubuntu
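For what it's worth, the arithmetic behind those sizes is simple (my own rough numbers, not tool output): a marketing "3TB" disk is only about 2.7TiB, so after swap and root there's roughly 2.5TiB left for home:

```shell
# Rough arithmetic behind the partition sizes above (hedged: real usable
# space is slightly less due to LUKS and LVM metadata overhead).
DISK_GIB=2794   # a "3TB" disk in GiB (3 * 10^12 / 2^30, rounded)
SWAP_GIB=20
ROOT_GIB=300
HOME_GIB=$((DISK_GIB - SWAP_GIB - ROOT_GIB))
echo "about ${HOME_GIB} GiB left for home"
```

If you'd rather not do the math at all, `lvcreate -l 100%FREE -n home ubuntu` hands the remainder to home in one shot.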
  11. After the previous step, you should now have the /dev/mapper/ubuntu-swap, /dev/mapper/ubuntu-root, and /dev/mapper/ubuntu-home devices. Now start the Ubuntu installer, following the onscreen instructions until you get to the partition screen.
  12. Select "other" for the partitioning method. You will have to tell the installer where you want everything to go.
  13. Select your swap, root, and home partitions, telling the installer to format swap as a swap partition and root and home as ext4, setting their mount points to / and /home, respectively. Additionally, select /dev/sda2, setting the mount point to /boot.
  14. Finally, click "install" and follow the on screen instructions until it has completely finished.
  15. Almost there, but not quite. Unfortunately, the installer didn't know that you were installing on top of RAID, LUKS, or LVM, so you're going to have to manually update some of the installation yourself. First things first, you need to mount your new installation and chroot into it. Run the following to get that setup:
    1. mount /dev/mapper/ubuntu-root /mnt
    2. mount /dev/mapper/ubuntu-home /mnt/home
    3. mount /dev/sda2 /mnt/boot
    4. mount --bind /dev /mnt/dev
    5. mount --bind /sys /mnt/sys
    6. mount --bind /proc /mnt/proc
    7. mount --bind /etc/resolv.conf /mnt/etc/resolv.conf
    8. chroot /mnt
  16. You should now be in a chroot of your new install. Run apt-get install mdadm initramfs-tools to make sure you have RAID and initramfs setup tools installed on your system.
  17. Next, edit /etc/crypttab in your text editor of choice. It may not exist yet, but that's okay. Add the following line:
    luks-dev   /dev/md0   none   luks,retry=1,lvm=ubuntu
    where luks-dev is the name of your encrypted device that we're calling /dev/mapper/luks-dev.
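A note on that line: the first field must match the mapper name the "Disks" utility actually created (mine was a long luks-<UUID> string; luks-dev below is just a placeholder). If you'd rather generate the line than hand-type it, a sketch:

```shell
# Hedged sketch: emit the crypttab line for whatever your mapper name is.
# "luks-dev" is a placeholder; check `ls /dev/mapper` for the real name.
NAME=luks-dev
LINE=$(printf '%s\t/dev/md0\tnone\tluks,retry=1,lvm=ubuntu' "$NAME")
echo "$LINE"
```

Append the output to /etc/crypttab (e.g. `echo "$LINE" >> /etc/crypttab`) from inside the chroot.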
  18. Next, you need to update your initramfs so the system knows how to boot, then install it on your grub device so the system can find it. Do this by running the following:
    1. update-initramfs -k all -c
    2. update-grub
    3. grub-install /dev/sda
  19. Finally, you probably want to copy your grub partition to your second RAID device so you can theoretically boot from either disk (in practice, these will get out of sync, so if your main boot device fails, you'll probably have to boot into a Live CD/USB and reinstall grub to the second disk). To do this copy, you'll need to first load gparted to get the starting sector of /dev/sda3 (let's pretend that number is 50000) and subtract 1. Then run dd if=/dev/sda of=/dev/sdb count=49999 (dd's default block size is 512 bytes, i.e. one sector).
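The count arithmetic from that last step, as a sketch (50000 is the same pretend start sector; substitute the real one from gparted or `parted /dev/sda unit s print`):

```shell
# Hedged sketch: build the dd command from the start sector of /dev/sda3.
# dd's default block size is 512 bytes, i.e. one sector, so this copies
# the sectors preceding the RAID partition (grub core, boot partition).
START=50000                  # pretend value; use your real start sector
COUNT=$((START - 1))
echo "dd if=/dev/sda of=/dev/sdb count=${COUNT}"
```

Review the printed command before running it for real; dd with swapped if= and of= arguments will happily destroy your primary disk.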
  20. Now reboot, and you should be good to go!

Friday, August 2, 2013

Suborigins for Privilege Separation in Web Applications

One area of browser development that I have wanted to work on for a while now is privilege separation in web applications. In particular, I've always wanted a mechanism to help developers write truly modular applications, where the different parts are strictly separated by the browser. There are some very cool tools and ideas for how to do this within an application itself (my personal favorites being Devdatta Akhawe's work on data confinement and least privilege in web applications), but I would feel a heck of a lot better with a browser mechanism for this.

What are some examples of this? Well, there are many web applications that share a single real origin. Perhaps the most obvious example comes from my employer, Google, whose main origin, www.google.com, hosts many functionally separate applications (in the hundreds) that share an origin for reasons relating to branding, DNS latency, and seamless interoperability, despite being very different properties. This has an unfortunate side effect, though: a single XSS in any one web application at the www.google.com origin means that all properties at www.google.com are compromised, and nobody wants that. A lot of popular web applications face similar dilemmas, such as Facebook, Twitter, Dropbox, etc.

Browsers have several mechanisms these days that start to address this compartmentalization problem, but each has its own problems. Sandboxed iframes are a mechanism for containing totally untrusted data, but their synthetic origins make them, by design, very difficult to communicate with. Content Security Policy (CSP), which I'm a big fan of, is a great system for eliminating cross-site scripting (XSS) attacks (among other things), but is generally incompatible with current website design. Additionally, using CSP requires complete compliance across the entire origin, so if you have a lot of web applications at the same origin and any one of them does not use CSP, your entire origin is at risk. We really want something that would allow us to, among other things, adopt CSP piecemeal, one web application at a time, despite remaining at the same physical origin.

In my first several weeks at Google, I spoke to Michal Zalewski and Adam Barth about an idea of Michal's to provide exactly this kind of separation. We're calling this idea Per-page Suborigins, and you can see a detailed proposal here. Our objective is to provide a new mechanism for allowing sites to easily separate their content into separate, flexible synthetic origins, that are transparent to users, while still serving content from a single physical origin. Furthermore, the synthetic origins should be predictable and convey the full physical origin so that compartmentalized content can easily use current browser technologies, such as postMessage, to interact with each other.
I'll elide most of the details in this post and leave them for you to explore in the aforementioned detailed proposal. I really wanted to throw this idea out to the public as soon as possible to get as many ideas and as much feedback as possible, so I'll just limit myself to a basic overview in this post.

Overview

As mentioned earlier, many web applications share a single real origin for a variety of reasons, most of them pragmatic. In a sense, this is an accidental byproduct of the Same Origin Policy (SOP) in the sense that the SOP assumes everything at a given real origin is part of the same application. So our goal is to create a primitive that would allow developers to apply the SOP at a finer granularity and specify that different applications within the same real origin should, in fact, be treated as different origins under the SOP. This would fill a space between Sandboxed Frames and CSP by allowing consumers to separate trusted components into separate origins while still allowing efficient cross-origin communication via postMessage and CORS, but without significant retrofitting limitations on legacy applications.

In terms of actual use, there are three different use cases that we are aiming for:
  1. Separating distinct applications that happen to be served from the same domain, but do not need to extensively interact with other content. Examples include marketing campaigns, simple search UIs, and so on. This use requires very little engineering effort and faces very few constraints; the applications may use XMLHttpRequest and postMessage to communicate with their host domain as required.
  2. Allowing for modularity within a larger web application by splitting the functional components into different suborigins. For example, Gmail might put the contacts widget, settings tab, and HTML message views in separate Per-page Suborigins. Such deployments may require relatively modest refactorings to switch to postMessage and CORS where direct DOM access and same-origin XMLHttpRequest are currently used, but we believe doing so is considerably easier than retrofitting CSP onto arbitrary code bases and can be done very incrementally.
  3. Similar to (2), applications with many users can split information relating to different users into their own suborigin. For example, Twitter might put each user profile into a unique suborigin so that an XSS within one profile cannot be used to immediately infect other users or read their personal messages stored within the account.
Our proposal for Per-page Suborigins is an attempt to address all three of these uses. It allows a server to provide the browser with a suborigin name in an HTTP response. The browser then treats this as another part of the origin in SOP checks. That is, instead of just comparing scheme, host, and port, the browser will also verify that the two origins have the same suborigin name. Because these suborigins can only be created by the server in a new execution context (i.e. a new frame), in some ways you can think of this as a named Sandboxed Frame.

(Some) Details

Before we get any further, let's define a suborigin and how it interacts with the SOP. The good news is, it's pretty straightforward. A suborigin is a synthetic origin defined by an HTTP header such as:
   Suborigin: <name>
An alternative to this that has some nice properties is to define it as a CSP directive. For example:
   Content-Security-Policy: suborigin <name>;
Suborigins are a separate field from traditional origins that must be checked whenever the SOP is enforced. Thus, the above suborigin will result in the following SOP origin for its frame:
   origin:    <protocol>://<host>:<port>
   suborigin: <name>
where <protocol>, <host>, and <port> remain as they always have, and <name> is a name defined by the server in the HTTP header.
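To make this concrete, here is a hypothetical exchange (the host, path, and suborigin name are all made up for illustration):

```
GET /mail/inbox HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Suborigin: mail
Content-Type: text/html
```

A frame loaded from this response would carry the origin https://www.example.com:443 plus the suborigin mail, and under the extended SOP check it would not match a sibling frame served with, say, Suborigin: calendar, even though both come from the same physical origin.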

One of the key properties here is that there is no way to "surrender" or "escape" a suborigin. That is, once an execution context is created, its suborigin cannot be modified (other than being destroyed when the execution context ends). Similarly, there is no way to enter a suborigin context other than by a directive from a server in an HTTP response. So, there's no way to point a resource identifier towards a suborigin, for example, because we require that the server be the authoritative source in creating suborigins.

We've considered alternate ways of encoding suborigins, but rejected them for a number of reasons. The design document outlines a bunch of them, so I'll leave that for you to check out on your own time.

There are a bunch of restrictions on suborigins as well that give them interesting properties. The first thing of note is that permissions that are normally associated with origins should not be applied to suborigins. Geolocation and fullscreen permissions, for example, should not be shared between the main origin and a suborigin. Moreover, there should be no way for suborigins to obtain such permissions. This is because we want to avoid, at all costs, users having to be aware of or interact with suborigins in any way. All I can see coming from that is confusion1.

On top of this, we require that suborigins do not have access to document.cookie. This prevents information leaks and limits the damage that compromised frames can do.

The key to Suborigins, however, is that they can use postMessage and listen to message events, allowing them to communicate freely with the main origin. This allows the main origin to act as a type of central manager, doling out privileged information to suborigins as needed. Thus, if a suborigin needs access to a portion of document.cookie, for example, it can send a message to the main origin, which can choose to give (or not to give) that information to the suborigin. This has the potential to help implement a notion of least privilege to help mitigate attacks on a suborigin. See this paper and its associated GitHub project (also referenced at the beginning of this post) for design ideas on leveraging least privilege in web applications.

Conclusion

I've left out a bunch of other concerns and design points in this post, but the summary is pretty straightforward. Suborigins look an awful lot like a named Sandboxed Frame. They would provide a mechanism for creating privilege separation within a web application and between web applications that share an origin, while still allowing them to communicate.

Let us know what you think! Leave comments here, or on the design doc wiki page, or even feel free to shoot me an email (my Chromium account is probably the best place to reach me: jww@chromium.org). This is all in the early stages, and while I'm starting work on an implementation, by no means is anything set in stone.

Finally, a special shout out goes to Devdatta Akhawe who has been extremely helpful in these early designs.


1 I can see a future in which we might allow some less security sensitive permissions to be obtained by suborigins, but it's always easier to grant those at a later time than to take them away.

Thursday, May 23, 2013

Tip of the tree Blink now has CSP 1.1 script nonce support! And what the hell does that mean?

As a new Googler, things can be slow going. There's a lot to pick up here. New tools to learn, new systems to navigate, new people to interact with, and, of course, new code to explore and understand. So, naturally, I was quite excited when I made what was really my first substantial contribution to Blink (the rendering engine underneath Chromium that recently replaced WebKit), and I made the following Tweet about it:


This left a lot of people confused. Clearly I was excited about it, but why? So I thought I'd clarify here.

For those of you who don't know, Content Security Policy (CSP) is a browser mechanism for helping to eliminate cross-site scripting (XSS) vulnerabilities in web sites. Now, there are a variety of caveats to that statement, namely that CSP requires developers to code in a more limited style, requires the server to send specific headers, and ultimately still allows developers to shoot themselves in the foot. But, for now, let's assume it makes the world a better place if used1.

One of the limitations of CSP that needs to be enforced for it to effectively block XSS attacks is the requirement that the page contains absolutely no inline scripts. This means absolutely no script tags with code, e.g. <script>alert('foobar');</script>. It also means no sneaky inline handlers or javascript: protocol URLs, e.g. <button onclick="alert('foobar');">click me!</button> or <img src="javascript:alert('foobar');"></img>. To be clear, this is part of the policy that CSP enforces. That is, you can try to insert those on your page, but CSP will block those inline scripts from executing, which is where a lot of CSP's anti-XSS magic lies.

Why this requirement? Well, one of the insights of the creators of CSP is that the way XSS often happens is when a web server puts untrusted content on a page, and sanitizes the content incorrectly (or does not sanitize it at all). Then, even though the content is not expected to contain a script, the bad guy inserts a script which is served up to unsuspecting users. So CSP prevents these scripts from executing, and therefore if a bad guy tries to insert them, they simply will not execute.

By the same token, CSP allows external scripts to be loaded (i.e. <script src="some/path/here.html"></script>), but only from an explicitly chosen set of servers. Thus, a bad guy could insert a script tag that loads a script from a URL, but he would be limited to only a small set of servers, ostensibly controlled by the developer. So getting an XSS attack in this way will be much, much more difficult.
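Putting the last two paragraphs together, a policy enforcing both rules might be delivered with a header like this (the script host is hypothetical; the directive syntax is from the CSP 1.0 spec):

```
Content-Security-Policy: script-src https://scripts.example.com
```

Because 'unsafe-inline' is absent from script-src, all inline scripts are blocked, and external scripts may only be loaded from scripts.example.com.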

But the ban on inline scripts is a big limitation that, for many developers, will be a huge change. Although it's not impossible (hey, Twitter did it), the process might be very painful. For example, take a third party widget that you want to include on your site that relies on inserting an inline script into your page. It's possible that it might work right off the bat just by putting it in an external script, but there's a good chance it will require a lot of hacks, if you can get it working at all. At the same time, perhaps this widget consists of completely static, known JavaScript, so we're not worried about a bad guy putting anything into the script itself.

Enter the idea of a script whitelist. The creators of the latest version of CSP, CSP 1.1, realized that we don't want to ban all inline scripts; we want to ban all unknown inline scripts. But how do we specify known, whitelisted scripts? This is where nonces come into play.

A nonce is a use-once, randomly generated number that is unforgeable. Practically speaking, that just means it's a big, random number. What CSP 1.1 introduces is the ability for a server to list a set of newly generated nonces every time a page is loaded that can be used to whitelist scripts. Then, when the page is loaded, the developer may include inline scripts, but only if she specifies the script with a valid nonce. Say the server specifies with the page that 9253884 is a valid nonce. CSP 1.1 allows a developer to write the following:
<script nonce="9253884">
alert('foobar');
</script>
The clever part here is that the inline script specifies a secret that only the good guy could know. That is, a bad guy could still try to insert <script> tags on the page, but because the nonce is not guessable, his script will be rejected by the browser. And there you have it: the benefits of CSP while allowing inline scripts. There are actually a couple other uses of CSP 1.1 nonces, but I'll leave them out here because this is definitely the main use.
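As an aside on what "big, random number" means in practice: the CSP 1.1 draft doesn't mandate a particular size or encoding, but the nonce must be fresh per response and unguessable. A hedged sketch of generating one from a shell (16 bytes from the kernel CSPRNG, hex-encoded):

```shell
# Hedged sketch: generate a fresh, unguessable nonce per page load.
# 16 random bytes -> 32 hex characters.
NONCE=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "length: ${#NONCE}"
```

A real server would emit this value both in the policy it sends and in the nonce attribute of each whitelisted script tag, and never reuse it across responses.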

In short, while Chrome has had CSP for a while now, we're trying to get up to the CSP 1.1 spec so that we can provide developers and users with even more awesome security and usability benefits. The implementation of this nonce support in the latest development version of Blink is just a small part in a much bigger picture, but I hope it will go a long way in helping developers and spreading the adoption of CSP.

Clarification: I skipped over this, but was quickly called out on Twitter. Script nonces can additionally be used to include external scripts that are not from whitelisted sources. That is, CSP requires that all external scripts are only loaded from a set of whitelisted servers. However, if you specify a valid nonce, you can bypass this requirement.

Update on 2013-07-23: Because the nonce spec is not quite settled, we've moved it behind a run time experimental flag for Content Security Policy features. You can still test it out, but you just need to be sure to run Chrome with that flag. You can enable it by going to chrome://flags and selecting "Enable" beneath "Enable experimental Web Platform features."


1 In the past, I've actually done some research on the topic of CSP limitations, and there are a bunch. On the whole, though, I definitely believe it to be a boon for the Web. For the ultimate skeptic turned believer, see Adam Barth's blog post on CSP.

Thursday, April 25, 2013

Creating a new GNOME screen lock button (or any other application button)

The reasons are not super important, but I want a screen lock button in my panel. I happen to be using lxpanel, although this should work for adding a screen lock button to the menu of any GNOME system. I could have sworn this existed in the past, but I could find neither head nor tail of it anymore. In any case, what I really want is to add a menu item to GNOME. Much of what I did was shamelessly taken from http://forum.lxde.org/viewtopic.php?f=8&t=31300. What I did is as follows:

  1. Open up a new file /usr/share/applications/screenlock.desktop. Obviously, you can name this whatever you want, and you will probably want to name it appropriately for whatever application you are adding to your menu, although it must end in .desktop.

  2. In this new .desktop file, add the following:
    [Desktop Entry]
    Name=ScreenLock
    Comment=Lock your screen
    Icon=system-lock-screen
    Exec=gnome-screensaver-command --lock
    NoDisplay=false
    Type=Application
    Categories=Settings;DesktopSettings
    
  3. Obviously, change the name, comment, and icon for whatever you want to add. Most importantly, make sure to update Exec to whatever command it is you actually want to execute. You can change the categories, too, although I have not explored them much and don't really know what categories exist.

  4. Now you should have a new menu item. If you have a GNOME menu accessible, you should be able to go to Menu -> Preferences and see the new ScreenLock entry. However, what I want is to add it to my panel. I'll describe this for lxpanel, but I believe it should be quite similar for the GNOME 3 panel. Start by right clicking on your panel and going to "Panel Settings."

  5. Click on Panel Applets -> Add and select Application Launch Bar.

  6. From here, you can select the menu item to execute. In this case, I selected Preferences -> ScreenLock and then clicked Add.

That's about it; you should have a screen lock button in your panel now.
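As an aside, steps 1-3 can also be done from a shell in one go. This sketch writes the file to /tmp so you can inspect it first; copying it into /usr/share/applications requires root:

```shell
# Sketch: write the launcher file, then copy it into place with
#   sudo cp /tmp/screenlock.desktop /usr/share/applications/
cat > /tmp/screenlock.desktop <<'EOF'
[Desktop Entry]
Name=ScreenLock
Comment=Lock your screen
Icon=system-lock-screen
Exec=gnome-screensaver-command --lock
NoDisplay=false
Type=Application
Categories=Settings;DesktopSettings
EOF
grep '^Exec=' /tmp/screenlock.desktop
```

If you have the desktop-file-utils package installed, `desktop-file-validate /tmp/screenlock.desktop` will catch syntax mistakes before you install it.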

Monday, December 31, 2012

Running commands on resume on a Linux laptop

I recently installed Ubuntu 12.04 on a MacBook Air. There are some great instructions on how to do this on the Ubuntu wiki and everything basically went smoothly (of course, my desire to install ZFS on Linux added many complications). However, I quickly ran into a problem that I've experienced on numerous Linux laptops in the past: how to middle click.

Of course, on a normal Linux machine, middle click is an invaluable copy and paste tool. Unfortunately, modern laptops don't have any buttons, much less a middle click. Generally speaking, button clicks are simulated via the multitouch capabilities of trackpads, and they do not, by default, simulate middle clicks. Fortunately, it turns out that if you're using the Synaptics trackpad driver (the default if you follow the instructions on the MacBook Air install wiki page), there's an easy command to turn on middle click simulation with a three-fingered click:

synclient TapButton3=2 ClickFinger3=2 PalmDetect=1

Great! Works perfectly! Except, it resets every time you suspend and resume your laptop. Apparently, when you resume from a suspend, the trackpad disappears and then reappears so the options to the synaptic driver are reset. Very frustrating. This brings us to the second problem that I've run into in the past: how to run a script on resume, which seems to be the only way to reset these settings.

How to do this varies from setup to setup, but in my case (Ubuntu on a MacBook Air), it seems that the pm-utils power management scripts control these things. In fact, it turns out that one can add a new script to the /etc/pm/sleep.d directory that will get run on suspend and resume.

Unfortunately, solving our particular problem is not as straightforward as we'd like. When the machine resumes, you have to (a) wait for the X server to start up again, and (b) select which display you want to apply this to. It took a while, but I was able to find a good suggestion on how to do this on a web forum that you can find here.

I created the following script that solves my problems. Don't forget, you'll also need to run the command on login, but that generally is much easier. Of course, you can pretty easily generalize this script to run just about anything you need on resume.
#!/bin/sh
# /etc/pm/sleep.d/01_middle_click
resume_middle_click()
{
    echo "updating middle click..."
    sleep 5
    DISPLAY=:0.0 su jww -c "synclient TapButton3=2 ClickFinger3=2 PalmDetect=1"
    echo "middle click update succeeded!"
}

case "${1}" in
    resume|thaw) resume_middle_click & ;;
esac
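For context on that case statement: pm-utils invokes every executable script in /etc/pm/sleep.d with a single argument naming the phase (suspend, resume, hibernate, or thaw), which is why the script keys on "${1}". A small simulation of that dispatch (the handle function here is just for illustration):

```shell
# Hedged sketch: simulate how pm-utils calls a sleep.d script with a
# phase argument; only resume/thaw trigger the middle-click reset.
handle() {
    case "${1}" in
        resume|thaw) echo "resetting synclient settings" ;;
        *)           echo "nothing to do on ${1}" ;;
    esac
}
handle suspend
handle resume
```

Also note that scripts in /etc/pm/sleep.d need to be executable (chmod +x /etc/pm/sleep.d/01_middle_click) or they will simply be skipped.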