Category Archives: Digital Freedom

A Social Network For Freedom

In this post I will discuss what I believe needs to be the next evolution in being social on the Internet: a social network for freedom. But first, let's review where we are and why it's bad.

What’s Wrong With Current Social Networks

Websites like Twitter, Facebook, and Google+ have triggered a leap in our ability to be social on the Internet, but in one very important way they go against everything that has made the Internet successful: they are centralized and closed systems. They give an illusion of openness because they offer public web service APIs. But when you are a user of those websites, everything you put into them belongs to them. You can get it back out in limited ways by browsing their website, using their API, or in some cases downloading some of your data, but they remain very much in control of your information, and you are very limited in how you get it back out. You also give them the ability, although arguably not the right, to use your information for their benefit, such as selling it to advertisers. They can also hand your data over to third parties, censor what you say, or report it to the government [1].

Besides the privacy problem of them sharing your information without your knowledge or approval, there is a more ruinous compromise in using those websites: you can only be social with people who use those websites if you yourself use the same website. There is no freedom of choice in whom you decide to trust with your information. This is a form of censorship. With email and the Web, you can host your information on any server connected to the Internet. There is a common set of rules and methods (i.e. open protocols) for servers connected to the Internet to transfer email and Web content between each other. Imagine if you signed up for an email account on GMail but could only email people who also had a GMail account, and couldn't email anyone with a Yahoo!, Hotmail, or any other email account.

Even though GMail hosts your email and could take advantage of your data for its own purposes, which GMail does by using the content of your email to select more relevant advertising, you are free not to use GMail. You can sign up with another service or run your own email server and still exchange email with users of GMail.

Closed social networks also limit what other companies can do to innovate using the data contained within those networks. Google was possible because web servers were open to its crawlers, which could find, index, and expose the wealth of information available on the Internet. New companies are limited in how much information they can access within the current social networks. There are obvious limitations to accessing private data, and it is important that people understand how their data can be used. But people already choose to share enormous amounts of information, and we haven't even begun to see what is possible in how you can extract, combine, analyze, and create new forms of data.

A great quote from John Gilmore applies here: “The Net interprets censorship as damage and routes around it.” [2] Even though Gilmore was talking about censorship I believe anyone who understands the Internet would interpret many aspects of Twitter, Facebook, and Google+ as damage to the Internet. How can the Internet route around these websites? Through a social network for freedom — an open and decentralized social network which respects users’ freedom and at least maintains, if not improves upon, the features we all love about current social networks.

How We Can Create A Social Network For Freedom

I should admit that I am certainly not the first person to come to this conclusion or come up with these ideas. Diaspora*, for example, is an open source project aimed at creating an open and distributed social network, and it seems to have a lot of people and momentum behind it. I only recently discovered it and haven't had time to look into it in detail, but it definitely sounds promising.

Before I get into how a social network for freedom should work, I'd like to review a couple of important aspects of current social networks that have made them so huge. The first is the new types of content they allow you to create and consume. There are two main types of content you create on a social network: persistent content and transient content. Persistent content is information about you that for the most part doesn't change, like your name, picture, location, interests, etc. It is your profile. Transient content is information you create and accumulate over time, i.e. posts or status updates. Further, there are three major types of transient content, each larger and less frequent than the last. The first and most common is the short blurb that is quick and easy to create, like a tweet or a typical Facebook post. The second is a more thoughtful, slightly longer post that won't fit in a tweet but sometimes shows up on Facebook, though less frequently. The third and last is more like an essay. I don't often create longer-form content, but every so often something builds up to that point and I want to be able to share it with people. Whatever new social network we create should handle (1) easy creation and management of content of all these types, and (2) appropriate presentation of the content to match its qualities, so nothing is lost and the people you intend actually get to see it.

The other important aspect of current social networks I wanted to mention is being able to control who sees what content you create. Some content you want to be able to share publicly. Some content you only want to share with a select group of people. And some content you only want to share with one person. This is probably the most important breakthrough which sets modern social networks apart from previous technologies like personal homepages and blogs. It significantly contributed to making people comfortable with and want to share more and more information with each other on the Web.

These are the core features which will be critical to creating a social network for freedom:

  • Open communication protocol — Simple, well defined, and secure protocol for communicating with and between servers.
  • Distributed and decentralized architecture — Anyone can run a server and anyone on any server can share and communicate with each other. Each server, or set of servers working together, services a domain in the social network. My server would service all users @joemonti.org. Servers/domains can also be public or private. This would allow an organization to run a private version only accessible to their employees.
  • Asynchronous permissions — A model similar to Twitter or Google+, where you can follow me but I don't have to follow you. I can choose to require my approval before you can access parts of my information, but I do not necessarily need your approval to see yours. This isn't how Facebook works: on Facebook we both have to agree we are friends before we can see each other's information.
  • Sharing control — Ability to share content either publicly, to a subset of pre-approved users (groups, circles, etc), or to an individual.
  • User discovery — A social network isn’t very interesting without friends, and you can’t have friends if you can’t find them. This is as important for finding people you already know as it is for finding people you don’t yet know but with whom you share common relationships or interests. It also has to work across domains. There will need to be a way for websites to create searchable directories of public profiles across domains as well as limited private profile access given proper permissions.
  • Search and data sharing — There needs to be a mechanism for public and authorized data to be exposed in a way that is accessible by third parties. In a distributed environment, there needs to be a way for external services to provide value to data distributed across multiple domains. This will allow search engines to index content across the network as well as foster innovation in new ways of using social data.
  • Applications — We can’t invent all capabilities possible in a social network alone, so there must be an easy way for application developers to create new capabilities and applications that are integrated with the social network.
  • Embeddable in the Web — You should be able to like/share content from around the web just as easily as you can with Facebook and friends, by embedding social widgets on any website. There are some cross-domain issues here, but it is a necessary feature.
  • Export and Transfer Data — You must own your data. For that, you must be able to export your data, save it locally, and transfer it between domains.
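
To make the open-protocol idea a little more concrete, here is a rough sketch of what a transient-content message traveling between servers might look like. This is purely illustrative; every field name, the address format, and the signature scheme are assumptions, not part of any existing specification:

```json
{
  "type": "post",
  "author": "joe@joemonti.org",
  "audience": ["public"],
  "created": "2011-08-15T20:14:00Z",
  "body": "A short status update, i.e. transient content.",
  "signature": "hmac-sha256:..."
}
```

The receiving server would verify the signature, check the sender's permissions against the audience, and then deliver the post to its local users, much like an SMTP server accepting mail for its domain.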

There are many more important features, but those I believe are the most important.

Challenges In Creating A New Social Network

There are a lot of challenges and obstacles to overcome, not only in building a new social network but in building an open and decentralized one. The biggest challenge is actually getting people to use it. Someone can create the best social network in the universe with the most features, but if your friends aren't using it, or you can't find new friends on it, then you have no incentive to participate. There has to be activity. There have to be people using it so there is interesting content to see and so that you know people will see the content you create. Creating a new social network is risky, but perhaps through a grassroots effort, or with big players getting involved, it may take off. A new open social network may be able to succeed where a new closed social network would not.

Another major challenge will be security, privacy, and spam. These are critical issues to address and are among the most important to users. Being an open source project is a double-edged sword here. On one side, security vulnerabilities are easier for the community to find and fix before they are exploited than in closed systems. On the other side, an open source system is much better documented, which helps attackers find exploits. The net effect, however, is that an open source project can be more secure. One thing people need to realize is that an open and distributed social network is very much like email with regard to security, privacy, and spam. The good news is that, for email, these problems are at acceptable levels for most users. So I believe these issues can be addressed, although they will take significant effort.

To address these issues and to build a project that succeeds, here are a few ideas that will be helpful:

  • Start with designing the protocol specification.
  • Provide a reference implementation as an open source project. This will help test and validate the protocol specification, as well as provide at least a starting point for people to use the service and run their own server.
  • Use SSL to protect data between domains and shared-key message digests to validate message contents and authenticity for users. Encrypt personal data stored on disk.
  • Ability to integrate with other social networks that don't follow the same protocol. Whether we like it or not, people will still want to use other social networks. A new social network couldn't survive without integrating with others like Twitter, Facebook, and Google+.
  • Build for scalability. While there may initially only be a few users and it may never grow beyond that, if the network does grow the platform must be able to handle it. If it can’t handle growing traffic, it may miss its only opportunity to be accepted by and used by a widespread audience.
  • Decouple back-end (core service) from front-end (user interface) to make it easy for administrators to customize the user experience. Building custom front-ends may be more important to more people than we realize and this will help foster front-end innovation.
  • Make it easy for server administrators to upgrade to new versions. Along the same lines, don’t break old implementations with new versions of the protocol or reference implementation.
  • Encourage others to create their own implementation of the protocol using different programming languages, databases, and other technologies.
  • The core project team can host the reference implementation and charge users a minimal fee (e.g. $2/yr), giving users an easy-to-use and ad-free experience. The fee will support the service and its development.
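
The shared-key message digest idea from the list above can be sketched with standard tools. This is only an illustration of the mechanism; the key and message below are throwaway placeholders, not part of any defined protocol:

```shell
# Compute an HMAC-SHA256 digest of a message body using a shared key.
# A receiving server that knows the same key can recompute the digest
# to verify the message was not altered and came from the claimed sender.
digest=$(echo -n "status update payload" | openssl dgst -sha256 -hmac "s3cret" | awk '{print $NF}')
echo "$digest"   # 64 hex characters; changes if the key or message changes
```

In a real deployment the shared key would be negotiated per pair of domains (or per user session), and SSL would protect the transport itself.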

Conclusion

There are of course many more details to discuss, but that is a thorough overview of why I think we need a social network for freedom and how I think we can start building it. Unfortunately I don't currently have the time to build it, but hopefully the community will begin to see the benefits of this approach to a social network, and enough of its members will be developers who can build it.


My take on Ubuntu Unity

With the release of Ubuntu 11.04 (Natty Narwhal), Canonical changed the default user interface from Gnome to Ubuntu Unity. Being a long-time Gnome fan and advocate, I admit I was initially surprised and a little upset with the idea. I was really hoping to see Gnome Shell (Gnome 3) in Natty, but I guess they had their reasons not to include it and to use a completely different desktop instead. Regardless, I couldn't help but be interested in Unity. It is Gnome Shell-ish, and I am a fan of change, as long as it improves my experience and doesn't get in the way of what I am doing or over-simplify things.

Ubuntu Unity is really designed for netbooks, where screen real estate is at a premium. Unity drops the bottom task bar and replaces it with a dock/launcher on the left of the screen, basically the OS X dock. The top bar also gets a more OS X-like style, with an app menu/finder button, the main menu of the currently focused application, and a notification/calendar/date/time area. It doesn't really feel like an OS X clone, but it has a lot of similarities.

I have been using Unity for about three weeks exclusively at work and at home. Today I had to switch back to “Ubuntu Classic” i.e. stock Gnome 2 on my work PC, and here is why…

  • No panel applets
    • I wrote a little applet that gives me a menu listing our PostgreSQL servers (15+); picking one opens a new Gnome Terminal and psql's into the right server with the right settings. It's really handy for my job, and I missed it and didn't want to re-implement it.
    • The cpu/memory/disk usage applets help me keep a stable system especially when my software is misbehaving.
  • Grouping all windows of a single application makes it really hard to quickly jump to the right window. It makes maneuvering between windows at my level of multitasking more painful. With dual monitors it's even worse.

I still may keep Unity on my home laptop because I don’t do as much multitasking and do find the layout more useful for home/casual use.

There are a few more annoyances in Unity that I ended up finding workarounds for:

  • It was not easy to configure the Unity dock/launcher. You can use CompizConfig Settings Manager, under the Ubuntu Unity Plugin.
  • By default, only select notifications show up in the notification area. This is broken. Here is how to fix it.
  • Gnome Do is much faster at launching applications, and I have really grown to like and depend on it. Unity's default <Super> keybinding overrides any keybinding using the <Super> key, including <Super><Space> for Gnome Do. You can fix it by changing the "key to show the launcher" in CompizConfig to something like <Shift><Super> instead of the default <Super>.

Overall, I like Ubuntu Unity, but it over-simplifies the desktop too much for high-demand use cases.


A short KDE4 review from a long-time loyal Gnome user

This week I spent a short amount of time in KDE4. I have never been a regular KDE user, always hardcore Gnome, but every once in a while I like to check in with KDE to see what’s going on. It was really easy to setup in Ubuntu, just “sudo apt-get install kubuntu-desktop”.

Here is what I liked:

  • Awesome eye candy — It looks stunning.
  • Plasma — The new Plasma stuff does more than just look good, it seems like a great architecture for Desktop components.
  • Main Menu — Good design, intuitive.

Here are the reasons which sent me back to Gnome:

  • The biggest reason was that in general I am more used to the Gnome world.
  • I don’t like the KDE software suite that much, i.e. I prefer (or am more used to) the applications in Gnome like Gnome Terminal, Pidgin, etc.
  • The main menu was difficult to navigate backwards from sub-menus.
  • I love Gnome Do.
  • I am weirded out by how all KDE apps are called KSomething.
  • I have a gnome panel applet that I wrote and use at work all the time. It lets me quickly jump to an SQL terminal for a database. It would be a good test for writing a Plasma widget, but I didn't have time.

So, in general, I like the direction KDE is going. They're really pushing the technology with things like Plasma. I may check KDE4 out again for a longer-term stint. But I'm still, and will probably always be, a Gnome guy.

RE: Is the Success of Google’s Android a Threat to Free Software?

In response to Is the Success of Google’s Android a Threat to Free Software?:

The article makes a great point about how Free Software has almost no presence in the Android Market: Android is built on Free Software, but almost no Free Software is built on Android. It also argues that this trend is likely to invade our lives and diminish the spread of Free Software.

What I don't think the article gets right is how to fix it. I don't think support from the Free Software community alone is enough to make a meaningful impact; it needs to come from Google as well. I think the primary problem with the lack of Freedom in the Android Market is that Google does not promote Free Software or provide the integration and tools for Free Software on the Android platform. How can Google not promote with vigor the community that has enabled so much of their technology?

What should Google do? Every free software developer needs web tools to promote and provide access to their project, and these have to be accessible from wherever end users interact with the project. For web tools, I think leveraging Google Code is a good thing. It is a great tool for Free Software developers to manage their project online and provide end users access to the code, documentation, and the ability to collaborate. For accessibility, I think source code information (including the corresponding license) needs to be available in the Android Market. For instance, when I look at the details of an app, it should list an entry saying how/where to get the source and under which license it is released.

There are other ways Google can promote Free Software for Android apps, but I think exposing source code information is a necessary start. Next, they can promote apps that are Free Software and offer incentives for developers to release their code. Maybe waive the developer signup fee for Free Software developers (audited periodically to ensure all uploaded apps comply with the required terms for Free Software).


Freedom to Restrict

I would love to follow the ideals of Free Software pioneered and maintained by Richard Stallman and the Free Software Foundation, but in a practical world I can't commit 100%. They are great ideals for which to strive, but they are just not practical 100% of the time for 100% of the people. Free Software rights are essential for some users, less essential for many more, and most users are oblivious to them. Yet Free Software rights benefit everyone, even those who don't require them or are oblivious to them, because everyone can reap the benefits won by the few who take advantage of them.

I think I lie in the range of Free Software being "slightly less essential." I choose to use Free Software wherever possible, which for me covers the majority of my software needs. But I have acknowledged that there are some things I like to do with software that can't, and probably never will, be fully done with Free Software. In my case the primary thing is gaming. Although it pains the ethics centers of my brain, I have an Xbox 360 and enjoy playing it.

Stallman and the FSF seem to focus solely on the rights of the end user, and to impose those values on software developers and distributors. But I have personally come to accept that software developers and distributors have rights too, and those rights include imposing whatever rules they see fit on their users. If developers and distributors wish to restrict the rights of their users, though, those users have the right, some would say the obligation, to reject the software.

My optimistic hope is that users will keep software developers and distributors in check by calling them out and boycotting their software when unethical and overly restrictive rules are imposed. The more common and effective response of the community, however, has been to create a Free Software alternative. But we shouldn't overlook the power of a passionate community.


SSH data transfer trick

I'm surprised I hadn't figured this trick out before, but I was kind of forced to when I got a new hard drive for my laptop, didn't want to reinstall Linux, and didn't have a lot of options. The only place I could back up to was my local server. A USB-attached hard drive would probably have been best, but I only had a network, so I needed to get the data to my server. Rsync might have worked, but I expect it would have taken a very long time. The best option would be to transfer a gzipped archive, but I couldn't save it locally and then scp it. So I had to direct the output of tar/gzip straight to the network. I've done a lot of things with ssh, but not this. What I found was the way ssh handles stdin and stdout.

If you pass a command as a final argument to ssh, it will execute the command remotely, piping stdin from the local terminal to the remote command and stdout from the remote command back to the local terminal. So all I had to do was execute a remote command that saves its stdin to a file on the remote system. This can be done via the command:

$ tar -cz . | ssh user@host "cat > file.tar.gz"

The tar -cz . says "create a gzip-compressed archive of the current directory." The | (pipe) takes the output of the tar command and feeds it to the ssh command. The ssh user@host "cat > file.tar.gz" command says "ssh to host as user and execute the command cat > file.tar.gz." In that remote command, cat is there to properly catch and redirect the stream: it reads its stdin and writes it to stdout, and > file.tar.gz redirects that output to the file file.tar.gz.

In short: tar produces the gzipped archive, ssh pipes that output to a remote command, and cat gives ssh a command to execute that takes its input (the output of tar) and writes it to a file.

Then, once I had the backup and had set up the new hard drive in my laptop, I could restore the data using the command:

$ ssh user@host "cat file.tar.gz" | tar -xz

This does the same thing in reverse. It sshes into host as user and runs cat file.tar.gz, which reads file.tar.gz and writes it to stdout. We capture that output and pipe it through tar locally, which gunzips/untars the data into the current directory.
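
The same stream pattern can be demonstrated entirely on one machine by standing in cat for the ssh leg of the pipeline. The paths below are throwaway examples:

```shell
# Local stand-in for the backup/restore round trip:
#   tar -cz . | ssh user@host "cat > file.tar.gz"
#   ssh user@host "cat file.tar.gz" | tar -xz
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "hello" > /tmp/demo_src/file.txt

# "Backup": archive the source dir to stdout and capture the stream.
tar -cz -C /tmp/demo_src . | cat > /tmp/demo.tar.gz

# "Restore": stream the archive back into tar for extraction.
cat /tmp/demo.tar.gz | tar -xz -C /tmp/demo_dst

cat /tmp/demo_dst/file.txt   # prints "hello"
```

Swap either cat for the corresponding ssh command and the same bytes flow over the network instead of a local pipe.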

I could have also mounted a remote directory on my server using something like nfs, but I didn’t feel like taking the time to set that up.

This is a really neat example of stream manipulation in Linux. Hopefully you can learn something from it.

Note that I did all this from an Ubuntu Live CD so I wasn't actively using my old hard drive (mounted read-only with mount -o ro /dev/sda3 /mnt/sda3) while backing up the data, and so I could set up my new hard drive. The only other thing I had to do after restoring the data was reconfigure grub, /etc/fstab, and /etc/blkid.tab with the new UUIDs for the hard drive. I first had to use /dev/sdaX instead of UUIDs to be able to boot and find the UUIDs (I couldn't find the UUIDs in the Ubuntu 7.10 live CD, I'm guessing because it was a little old, and I didn't feel like downloading and burning 8.10). Then I could configure the new UUIDs, reboot, and all was good. Let me know if you would like more details on the UUID part.


Avoiding Ruinous Compromises by Richard Stallman

Great piece by Stallman. Everyday challenges and desires inflict non-free software on our lives; it can't be avoided. Unfortunately, I'm not willing to give up what it takes to be completely free of non-free software, but I do try to make as few compromises as possible. Even more challenging is the ubiquity of software in our lives beyond the PC, where free software has made minimal inroads. A primary example that is becoming much more relevant today is the mobile phone. Just about everybody has one, and none of them run on completely free software; most of them can't even run any free software, at least on standard wireless carriers in the US. Google's Android OS might be just what the free software doctor ordered, so we'll see where that goes.



More Publishers Phase Out DRM on Audio Books

“The trend will allow consumers who download audio books to freely transfer these digital files between devices like their computers, iPods and cellphones — and conceivably share them with others. Dropping copying restrictions could also allow a variety of online retailers to start to sell audio book downloads”.

I'm posting this because I made an earlier post about how I started using Audible. I've purchased and listened to 10 audio books from Audible since June '07 and have been mostly happy. The only gripe I have with Audible is the DAMN DRM. A big part of reading books is sharing them with friends and family. You can do that with the dead-tree versions, but not with Audible's. It feels like I don't actually own the book, even though I paid for it. When I tell someone I read some book (that I actually listened to in audio form from Audible), I feel bad because I can't lend it to them, because of the DAMN DRM. I'm being alienated by Audible, and my friends are being alienated by me. It is also a pain to shuffle books around my various audio players and computers. At least they let you re-download books you purchased from their website as much as you want. Hopefully Audible/Amazon will free their audio books from the grasp of that DAMN DRM.



FSF releases GPLv3

The Free Software Foundation (FSF) today released version 3 of the GNU General Public License (GNU GPL), the world’s most popular free software license.

Read full press release


Tivoization in GPLv3

A while ago I read a thread on the Linux Kernel mailing list where Linus Torvalds and others debated the "tivoization" clause (also known as the DRM clause) in GPLv3 (I believe in section 6). The clause basically states that consumer products containing object code whose source code is covered under GPLv3 must include all necessary installation information, such as authorization keys, needed to modify the GPLv3 source code and run it on the consumer product. The term "tivoization" comes from how Tivo uses GPL code while its hardware blocks modified versions of the software that lack an authorization key. This is a form of DRM. GPLv2 does not clearly restrict the practice, and circumventing the DRM is prohibited by the DMCA.

Linus and the others argued that they do not agree with the FSF that tivoization should be restricted by the GPL. They say that we (the free software community) have no right to restrict how hardware manufacturers design their hardware. In their view Tivo is not doing anything wrong: Tivo uses GPLv2 code but provides the source for the original and/or modified versions of that code in accordance with GPLv2, and beyond what they do with our software, we cannot control them.

But the tivoization clause is necessary to ensure the freedom and survival of Free Software. Imagine if Dell offered Linux (which they now do, but suppose they didn't) and, to keep people from breaking their systems, added a check in the BIOS for an authorization key in the Linux kernel which was never released, thus preventing anyone from running a modified version of the kernel. Now imagine every computer maker that offered Linux did the same thing. How can we exercise our 4 freedoms if the hardware won't let us? One isolated instance of locking Free Software with DRM doesn't have much effect on people's ability to exercise the 4 freedoms in general, but if tivoization persists, what can we do?

The basic idea is that, as a developer of Free Software, whether I distribute my code or let someone else distribute it, I want the end user of my software to be able to practically exercise the 4 freedoms of Free Software. If I let Tivo distribute my code and the hardware on which my code is designed to run makes the 4 freedoms pointless (in particular, freedom 1, which permits modifying software to fit the user's needs), that is wrong. Even though Tivo is not directly violating the 4 freedoms, they are not allowing users to practically exercise them.
