The Blog of Tom Webster

Chronic Ranter, Reviewer, and Developer. I speak only for myself, opinions are my own.

Cloudbound: Chrome OS Introduction (Part 1)

  2010-12-16 07:03:00 PST

Welcome to Cloudbound! My brand new blog category dealing with all things Chrome OS and CR48. As some of you may know already (full disclosure here), I am now part of Google’s official beta-tester group for Chrome OS and CR48. Before I get into the specifics, let me go over exactly what Chrome OS and CR48 are, just in case you are new to the game.

Chrome OS is a Linux-based operating system made by Google for cloud computing devices. My, oh my, what a buzzword. Cloud computing. What is it exactly? I can guarantee that most of you do some form of cloud computing each and every day.

Cloud computing is using applications and storage on the internet instead of on your local computer. Ever upload a picture to Facebook, Picasa, or Flickr? You’re using the internet to store those pictures; you are using cloud computing! Ever use Google Docs to type up a paper? That’s another example of cloud computing. Ever listen to music on Pandora, watch a television show on Hulu, or upload a video to YouTube? Those are all great examples of cloud-based media services. So why make an operating system based entirely off of web services? Plenty of reasons, and I’ll dig into the biggest of them as this series goes on.

The next post out will concern the hardware and physical feel of the CR48 notebook and my opinions on the design and build quality. Stay tuned!

Facebook's Added Transparency

  2010-10-06 16:51:00 PDT

Today Facebook has announced two major changes that directly combat issues that power users of the site have had since the website’s inception:

  1. The ability to export your data.
  2. The ability to organize friends into groups.

The most important of these is the ability to export user data. Every comment, picture, post, and status update, rolled into a ZIP file, given directly to the user. This is a game-changer. Facebook can no longer be called a walled garden. With the ability to take your data elsewhere, users don’t have to feel “stuck” on Facebook. Want to leave and take your data elsewhere? Go to a social network that will allow you to upload your Facebook ZIP file. Just like that, you’ve left Facebook for (hopefully) greener pastures. Personally, I’ll feel safer using Facebook, now that I know my data isn’t forever locked away in the vaults. Facebook was one of the largest bastions of data lock-in, and now that they’ve changed, they’ve set an example that will (and should) be followed throughout the tech community.

While not as important as being able to leave, Facebook has also introduced a new Groups feature. Groups isn’t just a resurrected “Friend Lists”; it will fundamentally change the way Facebook operates. What if you want to post a message to your college drinking gang, but leave your family out of it? Before, it was convoluted: creating the list, managing it, sending a message to the list. Friend lists never really felt like an A-list feature of Facebook. If Groups works like it should, it will bring Facebook much closer to letting users confine their social networking actions to specific aspects of their lives. Want to create a Thanksgiving Dinner event for just your Family group without your creepy Facebook stalker seeing? Go right ahead. As the title of the GigaOM article puts it: New Facebook Groups Encourage Private Interactions. And it’s true.

But what does this mean for startup projects like Diaspora? Hopefully not too much. The difference between Diaspora and Facebook is that Diaspora is still an open project (albeit a very young one) and still has a chance of doing these things better than Facebook. Diaspora was created with the ideas of data export and social aspects from the very beginning; Facebook was not. What today’s news may very well do, though, is drive much-needed developers away from the project, simply because they no longer see the need for it. Only time will tell on this last point.

In the tech world, today’s events seem like part of a recent trend: once-super-closed companies are starting to tear down their walls. With Apple allowing Google Voice apps into the App Store and Facebook letting users take their ball and go home, it seems that walled gardens are finally starting to realize that open is not only better, but more profitable as well. At-ease users are happy users.

Kubuntu: The Perfect Middle Ground?

  2010-09-12 20:30:00 PDT

I really really like Kubuntu 10.04. Out of curiosity, I loaded up Kubuntu 10.04 onto my external hard drive to give it a spin. Other than loving the fact that I have my own personal encrypted Linux install bootable at all times on just about any computer I happen to be near, I really really like the new KDE. Take the easy-to-use mentality of standard Ubuntu and mix it with the endless-options power-user mentality of KDE and you get Kubuntu. First impressions went something like this:

Oh! It's the ease of Ubuntu! But wait... all of these options... all of
these rolled-in configuration options, right up front? This is made for
power users! But it is still Ubuntu? I'm confused... and happy.

Kubuntu seems like the perfect middle ground for those not yet ready to jump to Fedora, SuSE, or Debian proper. Kubuntu still contains some helpful Ubuntu-centric additions (read: training wheels) for Linux newcomers, but maintains the features and customizations that power users crave. To add to the list of things I really like about Kubuntu: it is absolutely beautiful. If I were to pick one Linux distribution to deploy to a mass enterprise environment, it would be Kubuntu. Debian backend, Ubuntu ease-of-use, and KDE power and flexibility. It could possibly be one of the best general office use distributions yet. The only problem I can see with KDE when operating within a Microsoft-rich environment (and this is more of a general Linux/open-source problem than a KDE/Ubuntu problem) is the total lack of Exchange support. Whether it is with Evolution or Kontact, it just kills me when I find amazing open source applications that don’t work with the biggest force in business today. If running a business on open standards and open technologies interests you, consider Kubuntu a powerful choice. I highly recommend Kubuntu for those who are ready for more Linux power in their hands, while not quite ready to take the full plunge.

Server-Bits #9: Public Key Authentication in SSH or Passwords are Boorish

  2010-08-07 08:44:00 PDT

In this Server-Bits tutorial, I’ll show you a real time-saver when it comes to SSH and anything connected to SSH. To put it simply, public key authentication in SSH lets you log into an SSH account without typing a password: the public key (stored on the server) matches your private key (stored on the client machine), and that match logs you into your account. Because anyone with your private key can appear to be you and gain access to your account, it is extremely important to guard your private key with your life. The public key, on the other hand, can float around the internet for all time without any danger to yourself, your accounts, or your private key, as public/private key encryption is very secure.

What this all translates to in layman’s terms is that public key authentication allows you to securely log into your server without using a password. If this freaks you out at all, you aren’t alone. But as long as you’re creating keys from your user account and not from root, the risk of huge damaging effects is minimized, as an attacker would still need your sudo password to do any serious damage (provided that your server is properly configured). Of course, to take full advantage of public key authentication, you will be using two computers, but all of the key generation and even the copy will be performed client-side (the computer you want to have remote, password-less access from).

The first step to generating a public/private key pair is to run this command (which we will break down into detail):

ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa -C "Your Comments"

This is how the command breaks down:

-b: This is the number of bits to be used in the key. This number can be as low as 768, but since we’re running a server, let’s be overly paranoid and use 4096.
-t: The type of algorithm we will be using. In this example, we’ll be using RSA for our key generation. You also have the choice of DSA, but in that case, you will need to make your key exactly 1024 bits.
-f: This option specifies the file in which the key will be saved.
-C: This allows you to specify a comment to go at the end of each key. This is important because you will most likely have several keys floating around (we will go into why this is a good idea later), and if you ever need to revoke a key, a comment makes it much easier to distinguish which key is which.

You can specify a passphrase for each key if you wish. To log into the server with this key, you will need to type in the passphrase before the key will be unlocked. The downside to this is that by using a key, you are trying to move away from having to use a password, but the upside is that you can leave this key on a less-secure network (your office/school network) with greater ease-of-mind. You do have the ability to use a blank passphrase, but that is only recommended for systems you completely trust (your completely encrypted laptop, for instance).
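As an aside, you can verify a key's bit length and fingerprint after the fact with ssh-keygen -l. Here is a quick sketch using a throwaway key with an empty passphrase; the /tmp/demo_key path is just for illustration:

```shell
# Generate a throwaway 4096-bit RSA key pair with an empty passphrase
# (-N ""), then print the new public key's length and fingerprint.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 4096 -N "" -f /tmp/demo_key -C "demo key"
ssh-keygen -l -f /tmp/demo_key.pub
```

The first field of the -l output is the bit length, so you can confirm at a glance that you really got a 4096-bit key.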

Next, you need to make sure that no one else on the system can read your private key.

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa

Now that we have a key on our client machine that is readable by us, we need to pipe it over to our username on the remote server.

cat ~/.ssh/id_rsa.pub | ssh user@server 'cat >> ~/.ssh/authorized_keys'

This is how the command breaks down:

cat: This command just spits out any data in a file to the terminal. You can redirect the output to other files, however, which is exactly what we are doing here.
The ssh bit logs us into the remote server under our remote username, then runs a command on that end. The text output from the client machine is redirected through ‘cat’ server-side, which appends it to the end of the file located at ‘~/.ssh/authorized_keys’.
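If you'd rather not remember the pipe, most systems also ship ssh-copy-id, which performs the same append for you. The mechanics themselves are plain shell redirection, and you can see them work without a server at all; in this sketch, /tmp/sim_ssh stands in for the remote ~/.ssh directory:

```shell
# Simulate the server-side append locally: /tmp/sim_ssh plays the part of
# the remote ~/.ssh, so no actual ssh hop is needed.
rm -f /tmp/sim_key /tmp/sim_key.pub
rm -rf /tmp/sim_ssh && mkdir -p /tmp/sim_ssh
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/sim_key -C "sim key"
# Same shape as the real command, minus the 'ssh user@server' hop:
cat /tmp/sim_key.pub | cat >> /tmp/sim_ssh/authorized_keys
grep -c 'ssh-rsa' /tmp/sim_ssh/authorized_keys   # prints 1
```

Because the redirection uses >>, running the real command again with a second key appends it rather than clobbering the keys already authorized.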

Most of the time, when an SSH login is established, a key is looked for first, before the password. How does the system know who is who when using a key? It looks in the ‘authorized_keys’ file. Any public key located in that file is a candidate for a match, and if one corresponds correctly with your private key, you are logged in using public key authentication.

For our server system, we don’t really need any additional configuration, as most Linux distributions (including Ubuntu) enable public key authentication in SSH out of the box. The really cool part is that you can now mount remote directories without having to put in a password, making bash scripts endlessly useful. For instance, you could set up a completely encrypted backup system over SSH (I’ll be writing about this later). To learn more, head over to the Wikipedia page on public-key cryptography. It’s a wonderfully written article.

Thanks to for the key-copy command.

About Server-Bits:

If you’ve ever wanted to get started building a server, right in your own backyard/kitchen/closet/mother’s closet/mother’s basement, then this is the read for you. Aimed at the not-so-technical-but-willing-to-learn, this will give you everything you need to build that monster-server you’ve dreamed of. My goal: To give you a working server, for free, that you can use daily.

The Probable Demise of PastaNet

  2010-08-02 11:31:00 PDT

Attention PastaNet Users [and trusted friends in science]: PastaNet may be going away soon. AT&T doesn’t run their fiber across the street from where I used to live and has no idea when the Uverse service will be coming to my neck of the woods (about 500 feet away). I’m looking for any fast (and hopefully cheap, but I can pay a premium) ISP suggestions/comments/rants so I can pick one that may work for our users. From what it looks like, a best-case scenario (from my current knowledge) would be scaling back user accounts and active logins to a minimum level and having connection speeds hover around 30KBps. If any of you have ideas for ISPs that we could go with, or if you want to donate to Run-A-Fiber-Line-Directly-To-PastaNet Fund, please let me know. Your contributions to the think-tank of ideas and schemes, and your dollars, are greatly appreciated, and will help save PastaNet. As it stands now, running PastaNet off of the Time Warner infrastructure will cripple our operations and slow down everything we are trying to accomplish. I have recently been pricing how much a dedicated fiber run would cost and have put up a PayPal donate button on the right sidebar of this page. Thank you for helping me help you help us all, trusted friends in science.
