Showing posts with label Internet. Show all posts

6/08/2010

All Play and No Workflow Makes Internet Something Something

via @bldgblog, this post on organizing the Internet Reader workflow.

It's an anecdotal account of a designer reviewing how he saves links to read later. Complete with whiteboard wireframe.

You probably do something similar, and so do I, and maybe it is less complicated or more complicated. And we'd probably all love to write a blog post detailing exactly how. Because this is the internet, and we like to share.

And there are a shitload of links.

But here's what I thought while I found myself trying to visualize the map he was describing. I thought, "what the fuck, why am I reading this?"

Here's something I didn't read. It's an article on Time Magazine's website about how there are things out there that figure out what we want. Like Pandora and Facebook. It's funny that there's an article in what used to be my link into American content when I was seven years old, telling me my content is now being provided by products I've been using for five years.

But not that funny. Because there is a lot of content out there, and this is a SERIOUS PROBLEM.

I don't know how I can play with this duality in this essay in a funny way, so instead of dancing around it, I'll just say it. Yeah, the shit we read on the Internet isn't really world-crucial. But then again, it is. Amid the laughing cats there is the only forum for oil spill news and revolutions and campaigns, and, to use a term from the time of Time Magazine, civics. It's the internet, of course!

So the best guide towards managing this content is a designer's whiteboard and a chorus of sites and services ending in .us and .ly, or a five-year-old, five-years-late Time Magazine?

I work in an industry that is very similar to the Internet. It's called "printing". Printing is a lot of things, but for the purposes of this blog post it is a completely custom manufacturing system. This means, if there is a mistake in manufacturing, you can't have the customer go to the Apple Store and get a completely identical item, because there isn't one. It means you can't go back and fix spelling errors in the content after it is produced. If you fuck up, you are making the whole job again.

This is similar to the Internet not just because most of what the print industry actually prints is advertising, but because there is not one Internet experience that is the same. Everyone uses it differently and has a different product. Different content.

You can't run a successful print manufacturing system without a workflow. There are just too many places for mistakes. Between designer, salesperson, estimator, prepress, press, and bindery, there are about five places each for things to get fucked up. Any fuck-up costs money, any fuck-up past the estimator ends up costing material resources. They call it spoilage. Stuff that gets recycled because it's no good to anyone.

So it strikes me as I'm reading this first link, how is there not a workflow for the Internet? It's like we kept a print shop open with nobody working there, and when the customer shows up we show 'em in and say, "help yourself". Try not to get your legs stuck in the rollers.

Of course, there isn't what OSHA likes to call "stored kinetic energy" on the Internet, and the only resource we have to lose is time.

But still, there is no unified approach to Internet content management. No workflow. Just a browser, and a bunch of .us.ly's. The fact that a browser can combine an address bar, a search portal, bookmarks, and maybe even RSS all in one program actually puts it in the front running for workflow.

One of the great parts of the Internet distributed OS-experience is that you can customize. Plug-ins, extensions, JavaScript. But you have to find these, or hear someone else talk about them, or start picking a bunch of social media chiclets at random, using your email address like a coin in a slot machine. Every workflow ought to have flexibility built in, but still, there is no place to even start.

It's been the prevailing logic that the content provider is responsible for this. There is an awful lot of talk on the Internet about "curation". Like, each website is a museum. How many museums do you go to a year? Every time you need to check a fact, do you run on down to the Smithsonian? Museums are nice experiences, but they are not resources for most people. Some websites do a pretty good job, with a sturdy comment system, and maybe even a little community going on, that leads people to stop into their site directly, to see the dinosaur bones, and the woolly mammoth, and the real Apollo space capsule. I'm thinking of Slashdot, or maybe BoingBoing. But these are still magazines, publishing about things that are going on elsewhere. If the Internet is really supposed to tell us what is going on around the world, and help us connect with other people, it's some sort of crazy open air market, where people are getting pickpocketed, lost amid the meat harvested from unrecognizable creatures, and a sweaty westerner is buying a strange little puzzle box at the cafe in the corner, where it seems they sell a lot of those little puzzle boxes to westerners. Maybe this is the favela chic stuff that guy is always talking about. What guy? I don't know, something I heard behind a stack of shipping crates.

I'm working on a redesign of my website(s) right now, because on the output end, your system and workflow do affect your content. Without the right tools, it's hard to make anything. But I still don't know what to do about absorbing the content, and helping my content be absorbed by others. I tweet links, comment on blogs, share-and-share-alike, but often it feels like I'm a guy waving a sign at an intersection, or sending ten thousand pieces of junk mail. I wouldn't rather send my stuff off to an editor, and wait six months for them to tell me they lost it (for fuck's sake, thank goodness internet self-publishing is actually rewarding compared to something!), but it seems like there is a hole here, for something not yet invented. I can actually almost taste it. Tastes like a community as easily browsable as Facebook, that incorporates any login, with public, semi-private, and private feeds, with syncable bookmarks, and read-and-comment-republish RSS/Twitter compiling... Hell, if I had any programming chops, I'd build it myself.

Until somebody builds it, I guess it's just "what's your pleasure, sir?"

3/09/2010

Interdome Content-Object Shakeup

Been thinking a lot about Tim's (of Quiet Babylon internet-fame) project, "Unlink Your Feeds".

The problem of multiple, interlinked feeds has long been a burgeoning neurosis of mine.

Let me share my problems with you!!!

I love interconnectivity. I use my Google account with great zest, trying out the new features as they add them. A lot of them are useful, but a lot of them I try to use, simply because having everything linked together makes it easy to experiment. I don't have to invent a new password and username to try X service, I just click the link. If I don't like it, I just stop using it, and never pay it another thought.

Additionally, Google's interconnectedness is a huge plus. There are plenty of portals and application uses between Gmail, Docs, Calendar, and so on, which I don't have to spell out for you. I use my iGoogle page as my widget desktop, and then access my cloud holdings from there. The interconnectedness makes it superior in function even when it lacks particular features, and all of this has made me continue using Blogger rather than going solely to Wordpress, caused me to shun Facebook, and even avoid using Twitter for a long time.

But Twitter... oh, oh Twitter.

It's just too damn easy. All those one-line witticisms I come up with during the day, with no one to share them. The ease of retweeting, rather than saying something original. The ability to take a picture of a strange car and share it with the world, all from my cell phone, while still driving with my knees.

Twitter isn't owned by Google.

So, I have this other feed going. In addition to my RSS feed, and my Reader feed. Now that I know what a wonderful world Twitter is, I get curious as to what the hold-outs who refuse to get Twitter are saying on Facebook. And I think maybe I should start cross-posting to Facebook.

Then they created Buzz, which is the Google-Twitter I always wanted, except that it still sucks, so everyone I read on Twitter is still only on Twitter. So I linked my Twitter feed to Buzz, as well as my Reader account.

And now I live in this hellish world I have helped create.

Tim is right--we cannot live like this. And while the idea of cutting the feeds free, and using each as each is best suited, has a nice, Marxist "to each according to his need" sort of feel, I still have this need for interconnectedness. I just can't blog, feed Reader, Tweet, Buzz, and Facebuke (the verb?) separately and simultaneously. Hell, I'm supposed to be writing books! I still want to maintain my Internet presence, and read what I want to read, so there has to be some overlap.

How, oh, how, can I make all this damn technology work for me?

The secret to using a Google Account, in my opinion, is to be flexible, and also to be patient. Many things work, and over time, Google makes them work better. And, work better together. Also, as I experiment with different tools, I change my use of them as they work, and work differently, together. Only recently did Google Docs get to a point where I was comfortable doing serious writing with it. And, since I spend less time online in "open surf mode" than I used to, I now only bookmark certain feed posts, rather than keep detailed notes about various web sites.

The same is possible with my feeds. I think I've hit upon a way to link them so that people who want to know what the deal is can still find out, without redundancy, and with most of the input automated.

This is the way it is working now:

I am looking at these feeds in terms of objects of communication. Any feed-ready information posted to the web is an object, treated as an individual, accessible, feedable piece. Different classes of objects hold different amounts and kinds of content, which may or may not overlap other objects' capacities for content.

Additionally, certain objects may have a certain shape, which allows them to fit into different data flows or lines of assembly (feeds). For example, I can set up Buzz to attach Twitter-objects to Buzz-objects in my Buzz feed. However, this shape is a shape of intensive flows (in the sense that it allows a certain change-of-state transition of the object, rather than a transitivity, like A = B = C). I'm getting all complicated with my terms, but basically, think of it like how yeast can bake into a loaf of bread, but bread cannot be dissected back into yeast. A Buzz-object cannot be translated into a Twitter-object (at least not yet).
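To make the bread-and-yeast point concrete, here's a toy sketch of the one-way shape of these objects. All the class names are invented for illustration; this models the idea, not any actual API:

```python
# Toy model of one-way feed-object transformation: a Buzz-object can
# absorb a Twitter-object, but there is no reverse move. All names here
# are made up for illustration.

class TweetObject:
    def __init__(self, text):
        if len(text) > 140:
            raise ValueError("Twitter-objects max out at 140 characters")
        self.text = text

class BuzzObject:
    def __init__(self, content, attached=None):
        self.content = content
        self.attached = attached or []  # tweets folded in, like yeast into bread

    @classmethod
    def from_tweet(cls, tweet):
        # an intensive flow: a change of state, not a reversible equivalence
        return cls(content=tweet.text, attached=[tweet])

    def to_tweet(self):
        # the dissection that doesn't exist: bread back into yeast
        raise NotImplementedError("A Buzz-object cannot become a Twitter-object")

tweet = TweetObject("one-line witticism")
buzz = BuzzObject.from_tweet(tweet)
```

The asymmetry lives in the class design itself: one direction is a constructor, the other is a dead end.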

RSS is the smallest common denominator in terms of objects, more or less. Any feed, be it Twitter or Facebook or a blog, almost always has an RSS feed generated with it. There are various tools to mash RSS feeds together, Yahoo Pipes being the most well-known. However, I am doing this because I don't want to create a new feed, I want to merge and ally the feeds of services I already use. Many of these services also have APIs, and perhaps I could work out some sort of program for managing my posts among the feeds. However, I'm not so skillful in the programming department. Also, if I use the recognized features of these different feed services already in existence, I'll probably be in better shape down the line to adjust to new features, and also I have the services' dependability to fall back on.
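As a sketch of what a Pipes-style merge actually does under the hood, here's a minimal, standard-library-only version. The feed strings are stand-ins; a real merger would fetch URLs and sort by pubDate:

```python
# A bare-bones sketch of merging RSS feeds: parse each feed, pull out
# the <item> elements, and combine them into one list. The feed text
# below is a stand-in for real fetched feeds.
import xml.etree.ElementTree as ET

def items(rss_text):
    """Extract (title, link) pairs from an RSS document."""
    root = ET.fromstring(rss_text)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

def merge(*feeds):
    """Concatenate the items of several feeds into one flat list."""
    merged = []
    for feed in feeds:
        merged.extend(items(feed))
    return merged

blog = """<rss><channel>
  <item><title>Long essay</title><link>http://example.com/essay</link></item>
</channel></rss>"""
twitter = """<rss><channel>
  <item><title>One-liner</title><link>http://example.com/tweet</link></item>
</channel></rss>"""

combined = merge(blog, twitter)  # two items, blog item first
```

The point of leaning on existing services instead of rolling this yourself is exactly what the paragraph above says: the service's dependability is the fallback.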

Content-wise, Twitter is the smallest common denominator. You can't find anything smaller than 140 characters. I also use Twitter most frequently, because of its small content size.

Additionally, many objects translate into Twitter-objects. Via Feedburner, I can make my blog posts echo as Tweets. Using Tweetdeck, my mobile Twitter client, I can post to Facebook and Twitter simultaneously.

Granted, it is annoying to see someone's high score in a video game echoed as a Twitter post. However, I've been posting less and less on the blog, and when I do, it is often something that started as a Tweet, but then grew too long. In this way, the Twitter-object derived from my blog's RSS feed is just a link to a memo, a "please see my memo re: X". The duplication is, in a sense, to force people to choose. Either they want my full feed, and tune in to the Twitter feed for the details, or if they wish they were spared my constant witticisms, they can just go to the blog RSS.
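The "memo" mechanism is simple enough to sketch. This is only an illustration of the truncate-and-link idea, not what Feedburner actually runs, and the URLs are made up:

```python
# The blog-to-Twitter echo in miniature: the Twitter-object is just a
# pointer back to the full post, truncated to fit in 140 characters.

LIMIT = 140

def echo_as_tweet(title, url):
    """Build a 'please see my memo' tweet from a post title and its link."""
    room = LIMIT - len(url) - 1  # one space between title and link
    if len(title) > room:
        title = title[:room - 3] + "..."
    return f"{title} {url}"

tweet = echo_as_tweet(
    "All Play and No Workflow Makes Internet Something Something",
    "http://example.com/workflow")
```

Short titles pass through untouched; long ones get clipped so the link always survives, because the link is the whole point of the object.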

It is not yet possible to auto-transmute my Reader shared RSS to Twitter, but that's okay, because I can always post a single Reader item to Twitter via a sidebar tool. This is probably for the best, because I can share anything interesting to those who like my reading tastes, but reserve only the "mainstream" articles for my main, common denominator feed.

Buzz can, and in my case is posting all of these things together in one, lump feed. It is a taste of the chaos. But, because Buzz has set out to aggregate all of these things, it is less of a network in itself, with its own flavor, and more of simply an aggregator. The cool part is, it shows up in my Google profile, so if anyone happened upon me via Google, they could get a good taste of what I'm into on the Internet.

Blogger, with all the new Gadgets you can add into it, is looking more and more like MySpace every day. But this flexibility is good for me. I now have two columns here. One to store these text essays, and the other to provide an easy way to examine the difference between my feeds. The Twitter feed, on the top, is the general feed. Next is the blog only, then my Brute Press stuff only, then other-source RSS only. All of it retains its original, unique feed-object character and content, while one feed, the Twitter feed, acts as the main doorway, strictly under my control.

Control, after all, is what this is all about. It's a personal micro-manager's Internet dream. The solution I just described for myself will undoubtedly not work for anyone else. This probably won't even be my own solution in six months, as I discover new tools, and start using the tools I have differently.

This is the key philosophical lesson of Internet content, content-objects, and object feeds. (You knew there was going to be a philosophical lesson, didn't you?) If you told people in the blog boom that they would be abandoning their blogs for a service where you tapped in 140 characters with your thumbs from your cell phone, they would have thought you were crazy. Same thing if you tried to sell Gutenberg a Kindle, or some other such foolish spatial-temporal technology metaphor-mashup. I'm sure whatever's next will also be just as weird as the present, and the past. Time is aggregation, after all.

The tools we have to communicate change the way we communicate. Big surprise, no? I'm sure we'll eventually figure out our feeds, so we don't have to read cross-posts anymore. Otherwise we'll drown in echoes, or suffocate in Internet Balkanization catatonia. People will sort out how to mean what they want to mean to the people they want to mean it to. Right now, to me, I'm directing my meaning to the people who follow me on Twitter. If you don't like that feed, please consider the other options to the left. If you don't like that, there is plenty of other Internet out there.

This way, both my communications and the people to whom I communicate can grow towards each other and into each other, like roots into the earth.

2/03/2010

Mailing List

Had this vision a couple days ago, and thought I'd share.

I was imagining a time when all relevant advertising is done online, either through sponsored site ads, or generated through your free services, email, and so on. As they get better and better at targeting web ads, either to your particular online identifier characteristics or simply to location and demographic, the idea of targeted direct mail seems pointless.

There's nothing more transitory than a mailing address, these days. Mail can be lost or damaged. People move. Mailing lists accumulate over such a long period of time, that often it's the people themselves that are changing.

But people have their cell phones everywhere. Even if they don't pick up your phone call, they'll probably listen to your message right away.

And email is even better. Machines can read email. Machines tell you what is important and what you should look at, in the days of the wide, wide, atemporal web. If you can appeal to a machine, your message will definitely be read. In some time period, it might be more important to have a machine read your message than any human.

So, perhaps the mail will be abandoned. People mail letters less, but in the amount of weight going through the mail industry, the real customers to abandon the industry are the mass mailers. Postcards and newsprint are going through the mail by the ton per minute, all in the hopes that maybe you'll see it. Fruitless! It's only a matter of time before all advertising is online.

While this may sound like a death dial tone for the mail industry, it could also be its rejuvenation. As people get bills online, ads online, and news online, they will no longer guard their mailing address as the gateway to their ad-free souls. Already, I readily give out my phone number and guard my email address, just because an email address is so easily passed around, and ads are sent by script. I know someone is unlikely to call me with crap when they can email much more cheaply. They'll email me every damn week. But I can easily screen my calls, almost easier than I can sort through spam. Maybe there is a time when the email address is the portal to individuality, and the mailing address is as casual as a Twitter username.

This could be the rebirth of the postcard--the original technologically truncated global communication. Replacing the @ with the Airmail stamp. Hell, it only costs half an iPhone app to send a PHYSICAL PIECE OF PAPER with writing or whatever on it clear across the continent, if not the world. People could opt-in to mailing lists, where they get weird, semi-promotional musings at irregular intervals. Why? For entertainment? For social networking? For world-wide democracy? Who knows why. Maybe just to bitch about what we're watching on TV.

Sure, no one writes letters anymore. But never has anyone written quite enough letters. How many emails have you received lately that totaled over a hundred coherent, properly spelled words? But wait a minute, people write philosophical essays on blogs! (At least some of us do.) Why do they do it? Because we're crazy. Because for some reason, the human race loves to communicate with people not in the immediate area, but will not make eye contact with strangers. Because people used to make pamphlets and hand them out even though it was against the law. Because people read stuff that ends up on their door step.

Because paper is a really freakin' weird device. Okay, get this--no phone, no 3G, no CAMERA, yet a remarkable resolution, fully-interactive surface over the entire object, and depending on what sort of input device you use, you get completely different results. It has been said that no one writes on this thing in the same way! Although it is easily recyclable, it can be made nearly indestructible, and even if it is totally damaged, it often still works like new. You can fold it, bend it, glue it, tear it, EAT IT, and repurpose it for any number of uses, from building bicycles to spitting it at your little brother. It's really cheap, too.

So maybe, in some hypothetical time period, when everyone is communicating instantly via Device X and Service Y over Network Q, all the really hype kids are mailing each other printed picture postcards of their sex organs, sharing the new slow-net meme, or even sending the track they just recorded with their band, "crimped" to paper via their DIY groove printer.

Maybe.

Here's some stuff the Post Office could be doing to make this time period not just a time period, but a SOON--some of which I can't believe they don't do now.

- Create unique postal addresses (UPA) for each person in the country. Make it a twenty-four digit number, or some hex code. Nobody has to remember all their friends' numbers, or even their own. They can still mail to a so-called "street address", or other such mnemonic. But the mail service looks up the actual client via a reference database, not unlike a DNS database. You can change your mnemonic via the database anytime you want (or "move", as they once called it) but as long as the mnemonic and the physical delivery address are still linked via your UPA, the mail is delivered. I may not be the only Adam Rothstein in town, or the only person ever to live at 4835 SE Sherman Street, but as long as my mnemonic handle, "Adam Master of the Interdome Rothstein" is on the envelope, I still get the mail. Sure, I chose the name when I was sixteen and it's silly, so it's only for personal mail now. Business mail goes to my business handle, "Adam Corporate Jerk Rothstein", which is also connected to my same UPA, and therefore both coming to my home address, even though none of the senders know where I actually live. Or maybe my UPA listing filters mail from certain senders to certain physical addresses. The database handles all of this, and all I have to do is update my record. Did you know that when you fill out a change of address form at the Post Office, you get a postcard to both addresses to confirm the change? It might as well say, "Please click on this link to confirm, and do not reply to this address as this email was auto-generated." They just need to take it a step further, and give you an IP address.

- It gets even easier now that you can print a custom barcode on any piece of mail with your home printer, using the online UPA database, very similar to a DNS whois. In a barcode there's nothing to misspell. The barcode is, naturally, instantly readable to anyone with a camera phone.

- Stop home delivery. Or, charge for it on the receiving end. Businesses pay for mail delivery, of course. Everyone else can do their correspondence by email... via the free Webmail client the USPS now provides, if they like. People with disabilities and the elderly get free mail delivery. Mail can always be picked up at the post office, with a private 24-hour box also costing money, but not as much as home delivery. Picking up mail at the window of your local branch is free. Post Office boxes and branch storage have an expiration, of course. After a certain physical amount of mail taking up space, you either have to pay for an upgrade, or it gets "deleted" (recycled into USPS mailer material). Just like your free email account in the distant, limited digital storage past. All the more reason to do important business by email now, because our email inboxes hold just about a terabyte, keeping personal records of bills and other annoying number series for our entire lives, without ever having a potentially compromising personal mailing to shred.

- The frequency of delivery is increased. Once you are paying for home delivery, you can avail yourself of all the different service plan options. You can pay per delivery, perhaps once a week, either prepaid, or with a credit card on net terms. Or pay for unlimited service, up to three times a day. As the quantity of bullshit mail decreases, the speed of service should increase. Especially if you pay for it.
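The UPA scheme in the first bullet is basically DNS for the mail: a couple of lookup tables with a level of indirection. Here's a toy version. The hex code is made up, and the handles and street address come from the text above:

```python
# A toy UPA resolver: a mnemonic handle resolves to a unique postal
# address (UPA), which resolves to a physical delivery point, DNS-style.
# The 24-digit hex code is hypothetical.

mnemonics = {
    "Adam Master of the Interdome Rothstein": "3f2a9c11d04e7b56a8f0c2d9",  # personal
    "Adam Corporate Jerk Rothstein": "3f2a9c11d04e7b56a8f0c2d9",           # business
}

delivery = {
    "3f2a9c11d04e7b56a8f0c2d9": "4835 SE Sherman Street",
}

def route(envelope_name):
    """Resolve the handle on an envelope to a physical delivery point."""
    upa = mnemonics[envelope_name]  # whois-style lookup
    return delivery[upa]            # the sender never sees this step

# Both handles land at the same door; a "move" is one record update,
# and every sender's mail follows automatically.
delivery["3f2a9c11d04e7b56a8f0c2d9"] = "a cafe in New York"
```

The change-of-address postcard becomes exactly that one dictionary write: update the UPA record once, and every mnemonic pointing at it follows along.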

Each of those bullet points contains numerous changes, but all have the same general inclination: the USPS should start re-envisioning itself as a Mail Service Provider, in overlapping silhouette of Internet Service Providers. There are clients with certain but varying hosting needs, physical networks of transmission, and of course, the content to be provided. Landline ISPs provide data packets over a network that has changed a certain amount, but also stayed roughly the same for a while. There is datahosting, packet transmission, and the sale of services. Mobile ISPs are new to the game, and are starting to pick up the product end of things. Comcast will skin you to rent you a cable modem, but AT&T partnered with Apple on the iPhone, which was probably their smartest and most customer friendly move ever. They still have a way to go, obviously, and many more milestones to pass before they are less of a "telecommunications company" and more of a "network access company". But they're starting to get the idea. The USPS has much further to go. The idea that they are delivering "mail to addresses" should go the way of the AOL portal and national news magazines. MSPs are delivering content to customers, and should totally redesign their service and distribution network around this. If my location-aware cell phone can tell I'm at a cafe in New York this week, why the hell is my mail going to my house? They need to make some network choices here. Maybe redesign a standard uni-mailer, into which all correspondence must fit, and is addressed and sold only directly at USPS kiosks, auto-printed with unreadable barcodes that will never be touched by human hands. If it improves service, people might complain, but they'll buy in. Apple knows it. Hell, people still fly on airlines, so they'll put up with whatever is necessary to get from here to there.

But I'm also going to do my part to further this transition to a redesigned, "Post-Net". (Great name, no?) I'm starting a mailing list, after the old style, when it was the only way to swap pornography, or read the latest conspiracy theories, or to get the good music, books, and comics. Not for any of those things necessarily, but to send and receive. All you have to do is send me your old fashioned, obsolete format mailing address. If it's your friend's address or your work address, that's fine. If it's a PO Box, that's even better. Just somewhere where you want to receive unsolicited mailings on a basis as irregular as the mail. Email me your address, and start checking your mail box again. Really. Do it.

Or mail me!

4835 SE Sherman St.
Portland, OR 97215

Through codec and bitrate, and gloom of social media, nothing will stop mail from pointedly plodding from one place to somewhere else. Except for no delivery on Sundays.

10/26/2009

And What Have I Done....

I've been playing with Wave for most of the afternoon, and it's been fun, but still pretty frustrating. It's a lot like moving into a new building while they're still installing light fixtures. There is dust everywhere, many things don't work right, but mostly it's just exciting to be in the new building and wander around, not really using the space, but just enjoying the new architecture.

There's all the stuff I could say about how it's amazing, etc, but I won't, because if you care at all I'm sure you've read it already somewhere else, in the near thousand blog posts that have just recycled the commonly held knowledge we all already saw in the video. So I'm not going to write about that, but simply record a few of my observations about wandering around in this new community center, just opened to the public, at least those willing to step around the ladders and buckets and stuff.

Why? Well, maybe that will be a little apparent by the time I'm done. Also, the reason you might be reading this either in Wave, or on a good "old-fashioned" blog might also be clear. But enough with the preludes, and let's get to it.

Firstly, none of the things I want to use work. True, I am writing a wave right now, or a "blip", if this new lingo is to be trusted. And after about five different "beginner's guides" I finally figured out how to search the public waves, so yes, I can do that too. But the Twitter functionality won't authenticate, and while I had some limited success getting a wave to show up on both a WordPress site and Blogspot, I would hardly call it really functional. So I'm stuck with... waves. Lost at sea. Adrift in the maelstrom. Metaphors ad nauseam.

You see, this is really what I wanted to do--I wanted to use a single platform for instant web publishing. I wanted to open one control screen, and instantly slingshot my words to all the many repositories I keep on the network. I wanted to finally have one Google product to rule them all, and with the instant-update quality that is defining the mobile infinite-net. Instead, I am still trying to untangle JavaScript and making use of copious Ctrl+C. I've written about it before... the dream of an easily accessible, atemporal network linking the contributing consciousnesses of the world in as much of a tangled singularity as it will probably ever get, what with our bizarre and varied tastes in personal hygiene and all.

But that's okay, because this is merely a preview of this game-changing, web-#.0-upgrading, temporal-continuity-destroying free web app. No need to get all broodingly philosophical on the first day, right?

Wrong! Look at these people! All of these villagers running around, pulling on the levers and setting up tents and shouting and waving their arms at their friends, trying to find the best space, and maybe even get a little something done before the porn bots and social media marketing gurus show up, as we all are sure that they will, as we cautiously peek out of the windows and at the horizon, keeping the children, old people and animals close, trying to build as many huts as possible before those vikings come over the hill.

Here are some interesting things that are happening:

- The Rush to Institute A Little Goddamn Law and Order: let some sysadmins in, and all of a sudden it's all Wikipedia in here! Some wild west; more like a starving puritan colony where the few people left are desperately trying to use Robert's Rules of Order to figure out how to get the corn to grow. We need a little less parliamentary procedure, and a little more Squanto! No, I'm kidding--I think it's awesome that there is already such a term as IBA (Initial Blip Author), and FIRM rules like reply-moderated tags, the separation of document and discussion waves, and a thousand little convoluted discussions about etiquette. All of these are the sort of things that don't really need to be discussed, because just like the rest of the Internet, these rules will develop if it's going to make any sense at all. And yet they still are discussed, and politely discussed again. Because we're all educated people here, and we just love consensus! So groundbreaking, and yet so anti-punk-rock, it just tickles me pink. Google Wave really might be the next big thing, if people keep taking it so damn seriously!

- The Beginning of a New Era Starts...: When? I don't know, it seems like everyone's already been here forever. All the good public waves have over a hundred comments, though this will probably end up being nothing as soon as they really open the floodgates. Maybe it's just that I can't get the Playback function to work, but it seems like Google Wave is going to have the same problem all new provinces and colonies have--everyone is too busy trying to survive to take down any history. Not that it's crucial to humanity to document these first few waves of the coming info-nami. Hell, the Internet isn't really exciting enough to keep a record. It's too big, too watered down, and too lumbering to track each individual sweat gland of the beast, spitting moisture off into the tiny mossy filaments it is always trampling under foot. And yet, right now it seems like Wave is a community, or at least in some sense. The minute Wave becomes just another part of the Internet, that community will be lost. Anyone know where to read about the beginnings of Twitter? Something about some concert in Texas? Whatever, I'm just glad I started following Britney before she hit the million mark. But shouldn't somebody be writing something down? Who wants to be secretary? Nobody? Okay, cool. I mean, I'm not going to do it either. Just sayin'. This will probably seem pretty weird a year from now. But then again, time always does.

- JOIN US, BECAUSE WE'RE ALL IN IT TOGETHER!: But there is something kind of weird about these beginning times. I remember as a kid thinking it was so awesome that I could dial into a BBS and play tic-tac-toe against someone in another part of the state! I probably never played so much tic-tac-toe in my life as when my dad brought home a 14.4 modem. Times have changed of course, because now you can sign into Google Wave to play Sudoku against folks in Malaysia. But that's not all that's going on here. People are making rules, forming committees, and inventing new RPGs! People are making widgets, and handing out javascript samplers, and starting brand new photo pools, and talking about religion and the Internet and food and who knows what else. Everyone is getting into it, because it's new, and they want to get down. Hell, I'm trying to write some real time essays on it. Why? Because maybe it will be totally awesome, that's why. And at any rate, if something else awesome happens here, I'll be around when it happens. The numbers are still small enough that I can watch the public waves update, and be able to make sense of it. I can even recognize some avatars in the miniature view. We might as well be neighbors here in Google Wave. It's not just new tech, it's new Internet, and everybody's getting involved. And can you blame us? Remember how awesome the first Internet was?

- And Other Great Prophecy: Who knows what? Who knows what will be in the pipeline tomorrow? Who knows when stuff will really start to work? Maybe tonight. Maybe a week from now. Maybe when they finally release that hot gadget like they talked about and it works great and everybody loves it. Maybe not for a year, until the American Workers' Revolution is Wave-Cast, and the face of politics (and don't forget that pain in the ass, media) is changed beyond all of our wildest dreams. What? Don't worry about it. It's prophecy! We all know how awesome technology is, and now it is so awesome that we can all tell the future. Time has folded in half; time is a wave; periodization has reduced its wavelength to the infinitesimal scale of instantaneousness, and we are all a giant numeral one in the center of a sudoku grid with only one box. Shit is crazy, and you are/will be/have been there. So wave your hands like you just don't care. Because it's a new mediapocalypse every day, and if you don't sign in, you might just not even notice and instead do something else.

So until then, and for as long as it lasts, I'll send you my dispatches from the forefront of the bottom of the wave. You might not be able to read them, because they'll get lost in the cloud, or I'll forget to make it a public wave, or maybe you don't have an invite yet so all you see is a YouTube video rather than my words. But what is this if not a sign of the times, and proof of the cutting edge? Cultural incompatibility is the sign of big changes. Far be it from me to try and dumb down history for us. That's, like, some professor's job.





If you like this writing, or other stuff I've written, drop me a line, and the next time I clog the public wave feed, I'll make sure to add you. I know, this is kind of bootleg, but hey, this is Google Wave, baby!

6/07/2009

Twitter, Semiotics and Programmatics, and Running out of Characters

I've been reading with interest certain sources regarding the push, from numerous entities, to create conventions of microsyntax, microstructures, and other neologisms for symbolic text in short, small messaging, with Twitter being the ideal service. I have written a bit previously on the semiotics of Twitter and its unique 140-character format, both on the way it shapes common language and symbolic language within its use. (Here; and Here)

The goal, it seems, is to develop conventions that increase the use of symbolic language within common language, and so expand what the firm 140-character limit can carry. Here are some pulls from some of the various interested parties out there:

"[O]ur goal is not to turn Twitter into a mere transport layer for machine-readable data, but instead to allow semi-structured data to be mixed fluidly with normal message content." (http://twitterdata.org/)

"Nanoformats try to extend twitter capabilities to give more utility to the tool. Nanoformats try to give more semantic information to the twitter post for better filtering." (http://microformats.org/wiki/microblogging-nanoformats)

"These conventions are intended to be both human- and machine-readable, and our goal here is to: 1. identify conventions in the wild, as users or applications begin to apply it. 2. document the semantics of the microsyntax we find or that community members propose, and 3. work toward consensus when alternative and incompatible conventions have been introduced or proposed." (http://microsyntax.pbworks.com/)

Very interesting! Besides the cool buzz words like "nanoformat" and "microsyntax", which are just itching to be propelled into circulation by the NYT's tech section (after which I will hear them again from all the publishing blogs), I am captivated by the goal of semanticization of content for people and machines--equally and fluidly. This is some cyborg shit, here.

The explosion of content in Twitter has created a need for programs and applications to help parse the data, to keep it usable. One can only follow so many people, and with the increase of users and the increase of posts we quickly reach a saturation point. As the Twitterverse of apps taking advantage of the simple Twitter API grows, this saturation is compounding upon itself, and Twitter is becoming less of a site, more of a service, and even, little by little, a format.

The Internet has given substance to all sorts of linguistic structures, from the densely complex (at least to the non-adept) programming languages of Flash, Javascript, etc., to the slightly more accessible "read-only" HTML, to the linguistically simple email, and even to the real-life-human-interface replicators of video/voice chat. However, each of these seems to find its place on one side of a categorical boundary, which I will call the signification language/programmatic language boundary. I'm about to launch into several of these categorical boundaries—which are somewhat dense distinctions of theoretical concepts, which often overlap as much as they differ. However, because semiotics, or the study of “meaning”, is about these very distinctions, I use them as diagrams or illustrations to try and get closer to a certain sense of meaning which I believe is relevant to the conversation.

Signification language is, simply, all common language and syntax as we know it, being that we are thinking, speaking, understanding humans. This is language built from signifiers, intending to reach the signified, or some ideal variation thereof. It is language which, as we know it, attempts to "mean" something.

Programmatic language, on the other hand, is still built from signifiers, but not intending to relate to the signified directly. Another way to put it is that programmatic language does not have pure content. Programmatic language is built from signifiers which are meant to interact, and thereby perform a linguistic function to content, but this content is separate like a variable, and therefore kept categorically separate from the rest of the signifiers with programmatic meaning. What I'm saying in a roundabout way is that this is a programming language. You cannot speak Flash. You can know Flash, and by compiling and understanding it via a "runtime", interact with content in various ways. The content is what is being spoken and understood, but being spoken and understood through Flash.

(This would be as good a time as any to remove any remaining doubt, and admit that I have only a basic understanding of simple programming. However, I believe I understand the concept enough to talk about it, at least from a semiotic standpoint.)

A good example of the programmatic is Pig Latin. It takes a language that does mean something, and converts it programmatically into a new form, which can easily be understood by anyone who can parse the program. Another example is the literary tool known as metaphor--anyone who can parse metaphor knows that it is not meant literally, and therefore he or she is able to easily search the surrounding content for the analogical terms of the program: A is to B as X is to Y. Logic is another sort of program; gold is yellow/all things yellow are not gold—this has meaning because of a way of understanding how it means, not only what it means. And so on. In fact, it might be said that the rules of grammar and syntax for our signification languages are themselves a programmatic component of signification, and this would not be totally incorrect. (And here is the overlap of the categories.) We are not reliant on grammar and syntax to signify, but for those attuned to the programmatic language, it transforms the content and allows it to have a new dimension of meaning: a new how it means. This new dimension, though not always being dogmatically utilitarian, is always related to use. Language is the use of language, whether in the act of signification, programmatic interaction, or wild, totally incomprehensible expression.

There is another concept I'd like to throw into the mix. This is the duality between free-play and universalization. It is, like the signification/programmatic duality, not exactly mutually exclusive. Free-play is mostly related to signification, because it occurs in the act of signification, along with intent. We gain new signifiers and meanings by a poetic play of the signifiers. Universalization works in the opposite direction. By forming a hard and reproducible definition of a concept, word, or action, we can ensure that meaning will not mutate, and anyone who avails themselves of this definitional quality can be reasonably sure the meaning can be established between various people, unified by the universalization of the concept. A certain amount of both of these occurs in all language, but signification can be almost entirely free-play (e.g. “You non-accudinous carpet tacks!”) and programmatics can be nearly pure universalization (e.g. “def:accudinous=0”). However, signification must also contain a great deal of universalization in order to mean anything more complex than simple emotional outburst. And programmatics contains free-play as well (everyone knows programming is quite creative, despite the stereotype). It is the difference between these two ideas that gives them their power--not their exclusivity. To take the Pig Latin example again: one could easily write a program to translate a poem into Pig Latin. It's strictly universal, and accurate. But could one write a program to translate poetry, and maintain its poetic play? Much more difficult. But try employing a poet to translate things into Pig Latin. It might work, but you'd be better off with a program that can streamline the universalities.
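And since I claimed one could easily write that program, I might as well put my money where my mouth is. Here's a minimal sketch in Python, assuming the schoolyard rules of Pig Latin (real dialects vary); the point is that it is pure universalization--rules all the way down, no "meaning" consulted:

```python
def pig_latin(word):
    """A purely programmatic transformation: the signifiers are shuffled
    by rule, and whatever the word signified comes along for the ride."""
    vowels = "aeiou"
    if word and word[0].lower() in vowels:
        return word + "yay"
    # Otherwise, move the leading consonant cluster to the end, add "ay".
    for i, ch in enumerate(word):
        if ch.lower() in vowels:
            return word[i:] + word[:i] + "ay"
    return word + "ay"  # no vowels at all

print(" ".join(pig_latin(w) for w in "gold is yellow".split()))
# → oldgay isyay ellowyay
```

Run it on a poem and every word comes out correct; run it on the poetry, and the play is gone. That gap is the whole duality in twelve lines.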

So, the goal of microsyntax (I'm just going to choose one term and stick with it) is to create a certain amount of universalization of programmatics, such that the content of Tweets can function programmatically, to better increase the quality of the content in the form. However, there is also a strict attention to maintain the programmatics within an overall format of free-play signification. This seeks to maintain wide use, ease of human understanding as well as computer parsing, and to maintain the free-play aspects that have made Twitter so popular.

The reason I have bored you with all of these mutated semiotic terms is so I can explain just how interesting this goal is. I can think of very few attempts to institute such a composite of signification and programmatic language in our linguistic world. There are plenty of overlaps in daily use of language between these concepts, though no defined interaction between them as a goal. There are some abstract examples where the goal is implied. World of Warcraft, or any other MMORPG, for example, is a combination of a signifying social network with the programmatic skill set of playing an RPG. Of course, the programmatic aspects of the game, once mastered, take a formulaic back seat to the social, conversational aspect of guilds and clans. You can even outsource your gold mining to Asia, these days.

So Twitter is at least somewhat unique in that developers of microsyntax are taking into consideration the fact that the programmatic will be bonded and joined, fluidly, with the signification language of the medium. These are programmatic techniques developed for the user. Basically, we are asking IM users to learn rudimentary DB programming--and expecting them to do so because it is fun and useful. If you don't see this as a fairly new and quite interesting development, then you are probably reading the wrong essay.

So what is unique about Twitter that is causing this interesting semiotic effect? What is it about this basically conversationally-derived medium that is causing us to inject it with programmatics?

This is what Twitter does--it takes text messaging, a signification language, and adds some programmatic features. First: a timeline, always (or nearly so) available via API. Second: conjunction, i.e. the “following” function; one can conjoin various accessible timelines into one feed. Third: search; one can search these timelines, within or across following conjunctions.

But these features are not within the signification matrix. The timestamp may be metadata, but the availability of timelines, follow lists, and the search are only available via the service framework and its API. Without the service's presence on the Internet, few people would be using Twitter, because even if you can follow and unfollow via SMS, how are you going to decide who to follow without search features and the ability to abstract the conjunctions by peering at other people's timelines? You might as well be texting your number to ads you read on billboards, trying to find an interesting source of information. With the web app, you can actually use the service as a service, and utilize its programmatics to customize your access to the content.

So how do the programmatic features begin to enter the content? I think it is because of the magic number 140. Because of this limit, the content is already undergoing some programmatic restraints to its ability to signify. Like in an IM or an SMS, abbreviations and acronyms are used to conserve space, while still transmitting meaning. But this is a closed system; this bit of programmatics continues to refer to the content. The interesting thing about Twitter is that the formal elements of the program within the text can reach out of the content, to the program of the service itself, and then back in to the content. In this way, it is completely crossing the barrier between form and content--not just questioning the barrier or breaking it, but crossing it at will. Because the content is restricted to a small quantity, around which the service's program forms messages, we are left with a thousand tight little packages, which we must carefully author. They are easy to make, send, and receive, but we have to be a bit clever to work within and around the 140.

This is a third semiotic category differentiation: the “interior” of content and the “exterior” of its programmatic network. As far as users are concerned, most Internet services are entirely interior. You create a homepage, or a profile, and via the links and connections this central node generates, you spread and travel throughout the network. You can view other profiles, but only via the context of having a similar profile. These other services are entirely interior, because everything moves from the center outward into space, and there is no border between the service's content, and its programmatic functions that lead between elements of content. The hyperlink is an extension of the interior, not a link to any exterior. The developer of Facebook or some other service may be able to magically “see” the exterior and manipulate it, as if s/he is viewing the “Matrix”, but the user can only see the content.

Twitter is different, because the service, for all intents and purposes, is not much more complicated than the programmatics the user already must utilize. The programmatic exterior is visible, because it is such an important element of what makes the interior content function for the user. The simplicity of the 140 limit makes the junction between interior and exterior very apparent; because there is so little space for content, the programmatics are relatively simple, and necessarily very available. And because of this, users are willing to creatively explore new programmatics, to venture into this “exterior” with their “interior” content, continuing to bridge the gap, because the functionality is already bridged so often in their understanding of the Twitter language.

Here are some programmatic symbols that have proved themselves useful. @ was the first find, I believe (shouldn't somebody be writing a history of this?), allowing conjunctions to grow across timelines. Then # (the development of which is traced at http://microsyntax.pbworks.com) similarly links posts into new timelines, not by user, but by subject. RT is a way of expanding and echoing content throughout new timelines, either across user-based timelines or subject-based. And then the ability of the Twitter service to recognize URLs allows the content to connect back to the rest of the Internet (and accordingly, URL shorteners, picture or other media storage, and anything else the web can hold).

All of these have been user-developed, and picked up and utilized by the widespread user base, thus proving their own efficacy. Eventually, the Twitter service added its own features, recognizing these symbols as its own unique HTML Twitter tags, and giving them native function in the form of links in the basic Twitter service, without requiring an app to do so. These symbols change and enhance the content of the Tweets, and allow the user to relate to and access content outside the Tweet itself, as well as interact between various Tweets in a universally understood way.
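To make the interior/exterior crossing concrete, here's a sketch in Python of how a program might pull these symbols out of a tweet. The patterns are deliberately simplified inventions of mine--actual Twitter entity extraction handles far more edge cases--but they show how the programmatic "exterior" rides along inside the signifying "interior":

```python
import re

# Hypothetical, simplified patterns for the user-developed symbols
# discussed above; real tokenization is considerably messier.
MENTION = re.compile(r"@(\w+)")     # conjunction across user timelines
HASHTAG = re.compile(r"#(\w+)")     # conjunction across subject timelines
URL = re.compile(r"https?://\S+")   # link back out to the rest of the web

def parse_tweet(text):
    """Separate the programmatic symbols from the signifying content."""
    return {
        "mentions": MENTION.findall(text),
        "hashtags": HASHTAG.findall(text),
        "is_retweet": text.startswith("RT "),  # the echo convention
        "urls": URL.findall(text),
        "chars_left": 140 - len(text),  # the magic number, enforced
    }

tweet = "RT @bldgblog: organizing the reader workflow http://example.com #microsyntax"
print(parse_tweet(tweet))
```

Nothing here knows what the tweet means; it only knows how the tweet routes. That, in a nutshell, is the programmatics reaching out of the content to the service and back in.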

I know there are other symbols people use out there, but they are not as widespread as these I have just mentioned. This is an interesting facet of the programmatic Twitter symbol. Any symbol can intend any meaning, either through straight signification or programmatic use. But, to really enhance the Twitter medium, it must catch on. This allows it to function in the medium according to the programmatics of the form itself—the timeline, conjunctive networks, and search. If two friends have a secret code, that might provide a certain use between two people. But once that code becomes a language general enough for meaning to be intended to the broad base of users, and similarly, appropriated and used by them, then it is not a cypher, but part of a language itself. Its use will play until it develops enough of a universal character to be available to just about anyone.

We have seen the Twitter service look out for these things and exploit/develop them, as any good Web 2.0 company should. One might call them “official”, or as much as anything about such a free service is official. Certainly, when Twitter recently changed the service such that @ responses would not appear in the timelines of those not following the respondee, this was about as official a service change as one could imagine.

This introduces another question, similar to this issue of “official” symbol universalization. We might call these questions “social” questions, because to the extent that language only occurs between individuals gathered in a mass, having the unique combination of free-play and universal programmatic meaning associated with content, the dictates of individuals do affect the language's use. Naturally, one cannot make a language illegal, or regulate its usage, but the attempts to do so will have an effect of some sort, even if not the intended effect. Social control of a language may not firmly control its meaning (content) or use (form), but it will certainly change it, on both accounts.

So the second social question(s) I would like to raise: in addition to the effect of relying upon the Twitter service to adopt and officially universalize symbols' programmatic use, to what extent are we willing to base the designs of new symbols, and their open-sourced, community-driven conventions, upon a single service in a closed, controlled entity, which just so happens to be a private company? I am not so interested in the intellectual property aspects (for the moment), but to develop a microsyntax for such a service is to develop a language that will be, in the end, limited and proprietary. What other services, forms, and media might the development of a microsyntax affect? To what extent should the microsyntax be limited to Twitter? To what extent will the usefulness of a microsyntax be affected by attempts to universalize or localize the language to a particular service? The size and popularity of Twitter seems to make these moot points, to some extent. Clearly a unique syntax is already developing, whether or not it is the best idea. But is “learning to speak Twitter” simply the best idea? Or should the semiotic lessons we learn from exploring microsyntax be better applied to a wider range of media than simply a “glorified text message”?

I do not know the answers to any of these questions yet, nor do I really even have any idea of what sort of symbols should be included. Being the amateur semiotician I am, I have a different position to push.

The notion I would like to add to the discussion is a bit abstract, but I believe it is important. I would like to introject the concept of Authorship, for what good it may do (if any).

Authorship used to be the main source of innovative programmatics in language. Naturally, a main source of significatory content as well—but even more important than the stories themselves was the way they were told. From the time of Homer, the author has held a significant position in language as the programmer, the prime mover, and the service provider. It was with a certain authority, a certain speaking of the subjective “I” transformed into universalized narrative, that an author was able to shape the use of language. Before the days of authors, perhaps group-memorized verbal legends were the original crowd-source.

And we're heading back there again. I'm not going to dignify Twitter-novels with discussion, but I think it is clear to say that unencumbered access to literature via digital technology is becoming more important to its consumption than the identity of the author. I don't think you can crowd-source the writing of a book per se, but you sure can't get anyone to read a book without a little bit of user-generated marketing.

But even though authors may be a little disappointed at no longer being well-paid (or paid at all) celebrities, they still haven't lost their power over language. They have a puissance, in the “pushing”, or “forcing” sense of the word, as well as the potential. Perhaps they have lost their way a bit, and forgotten the power one can wield with a bit of forceful word-smushing (certainly folks have died for it in the past), but the capacity is still there. Authorship is a firm hand around the pen, or fingers on the keys.

I don't think this bit of figurative nostalgia is unrelated. One doesn't set up a new syntax by writing a white-paper or a blog essay—one does it by going out there and using the syntax. Proposing something is never enough; one has to use it with force, and let the force of the symbols become self-evident. If it is powerful, then it shall be. Language has developed, since the age of authors and perhaps even before, via loud shouts, firmly intended phrases, and eloquent incantations alike. We know there is hate speech, and are wary of its power. What about language with the power to build, or unite? Or simply to communicate with lightning speed—a linguistic Internet in symbols and syntax alone. I was fascinated by Dune as a kid—the idea that the Atreides had their own battle language, a secret language only used in matters of life and death, bowled me over. No, Twitter is not a battle language. But it is some sort of new language. Perhaps, an Internet Language.

But this is the problem with the Twitter service: it is ultimately reductive, in signification and programmatics. It's that damn 140 character limit—both the source of its semiotic innovation and all of its troubles. By being one of the first popular services to define both an inside and an outside to its content (all others that think of themselves as ever-growing blobs come to look like them too), it chose too small of a box. We need more from our Internet content than 140 characters can ever provide. Therefore the wild expansion is occurring on the outside, and the Twitterverse is becoming a horribly mutated and desolate place.

The truth is, interactions with ulterior apps via programmatics and the API are near worthless. Sure, you can develop some good client apps for writing posts, keeping track of multiple timelines, and searching. But micropayments? GTD lists? Real threaded messages and chats? Media sharing? These are all hopelessly wishful thinking. Just because the service is popular does not mean you can convince everybody, or even a critical mass, to accomplish all their Internet uses through a 140-character window. All of these things exist in “large form” in their own separate interiors, and to try and shrink them into the syntax of Twitter is to squish them too much, and fill up that little 140-character box to the breaking point. This is not to say we have uncovered all that Twitter has to offer—but it is to say that most of the invention is horribly un-programmatically authored.

Twitter's power lies in its communication—in its content, rather than shoving content programmatically through an overloaded API. It delivers small, concise messages, and allows a certain amount of programmatic networking to access this content, in a brilliantly small and simple package. As authors, this is the avenue to develop for Twitter. To push Twitter and see what we can do with its programmatic content, not IPO-in-the-sky payday concepts.

But microsyntax need not begin and end with Twitter. What if we took the approach of the equally accessible interior/exterior, the content/programmatics approach we have found in Twitter, and applied it to other services, or created new services around this semiotic utility? What if rather than force all the exterior into a too-small 140-character interior, we developed an interior simple enough, say like plain text, and developed microsyntax to control the programmatic aspects of the access of this plain text in ways simple enough for any user to wield? What if a service was created not unlike email, that would route plain text on the basis of its plain text? What microsyntax could be added to email systems, for example? What about openly-readable, tagged, searchable email? Why not? Why are wikis constrained to web servers, like shadow-plays of web activity? Why aren't they linked via opt-in, streaming timeline conjunctions? Rather than storing an edit history, a wiki could be the timeline itself, constantly in atemporal motion rather than accumulating on a server. Anybody opting-in would be simultaneously reading and forming the wiki with their programmatically-intended text updates.

Twitter has also opened up the door. It has linked the programmatic with content in a way that appeals to millions of people, and could be argued to have provided real use to these same millions. Now that we, as authors wielding such methods, can see what it is doing to the usefulness of language, we have a new angle from which to push language. What other sorts of programmatic changes can we make to our content, both on Twitter, and in the rest of our linguistic world? Could we develop a microsyntax for everyday speech? Certain microsyntax elements leak into speech already. What about long-form Internet writing? What symbols would improve its function, and what HTML tags would provide better access both inside and outside the text? What should be universalized, and what should get more free-play? Should we develop a taxonomy of tags? A symbol to denote obscure metaphor? The possibilities, and the potential, are near endless.

4/22/2009

Distributed Solidarity for Bloggers (link to it! link to it!)

A Wall Street Journal article is making the rounds, titled "America's Newest Profession."

No, not Urban Small Game Management!

Blogging.

The point of the article is that some 500,000 people or thereabouts (in the USA) can, by some statistical measure, be considered "professional" bloggers, as in, making a substantial amount of money through clicks, freelance writing, or advertising and product placement.

None of which is really news to anyone who read the Technorati State of the Blogosphere 2008 report. (Oh you didn't? Ah--you must be the sort who reads the WSJ.)

Apart from the notable fact that the revenue model is still ad-based, with the EXCEPTION of the click-through (which is the selling of access, not of media proper--an important difference discussed elsewhere), this is really not such big news. I mean, maybe there are half a million Urban Small Game wardens in the country. Who knows? Who cares? I mean, when I see a small herd of cats making their way through my backyard, even I think of how easy it would be to pull out my knife and get a little bit of fresh meat to send to market. You see? Blogging is the sort of horrible urge to kill family pets that we just don't need to think about, unless we're hungry. (what?)

BUT:

This part caught my eye.

"And with millions of human-hours now going into writing and recording opinion, we have to wonder whether being the blogging capital of the world will help America compete in the global economy. Maybe all this self-criticism will propel us forward by putting us on the right track and helping us choose the right products. Maybe it will create a resurgence in the art of writing and writing courses. Or serve as a safety net for out of work professionals in the crisis. But for how long can nearly 500,000 people who are gradually replacing whole swaths of journalists survive with no worker protections, no enforced ethics codes, limited standards, and, for most, no formal training? Even the "Wild West" eventually became just the "West.""

Emphasis mine.

"Survival" is not really a problem. Blogging is easy enough that people are most likely transferring in and out of the ranks all the time--even those who earn money at it. Nor am I really concerned about "training" or "standards". These seem like old-school journalist complaints. Although I'm always in favor of more quality in writing, it is also true that it is difficult to teach good writing, especially for a medium constantly evolving (as opposed to pap format journalism, which is perhaps the easiest thing in the world to teach, if you don't include good spelling).

"Worker protections", on the other hand... interesting. Bloggers certainly seem capable of achieving a certain social standing, winning entrance to events and contact with sources via the fedora'd glamour of any old school reporter. And reputation among peers is also an easily gained trait for the worthy.

But these are all social, shifting categories, and as such, are about as structurally sound as the legs of the neighbor's cat underneath my lawn mower. (Sorry--I don't know what the deal is. I actually like cats. But I caught the neighborhood cat in my garage the other day. What was he up to in there? Cat's are so shifty. It could be anything!)

What sort of protections do bloggers need? Well, in essence they are freelance writers. In my brief freelance (for pay) experience, you kind of get the shaft. No benefits, complicated, crappy tax categories (damn you, 1099-MISC!), and generally, all the freedom of a poorly-constructed tower on a cliff, facing down the army of uninterested editors on one side, and the sea of poverty on the other. Not to say you can't be successful, or even wildly so, but it isn't exactly an entry-level job.

Not that writing ever was. But I suppose what is crucial here, is that on the one hand we are Evolving Towards a New Definition of Digital Literature and Journalism to Change the Face of Human Culture, and on the other hand, the people doing so are materially under the same professional model as the local shaman. As long as the magic is working, you are golden. But if your glimmer starts to fade through no fault of your own, you are just a crazy guy living in a dilapidated hut in the woods. And man, if the wrong kind of state religion happens its way into your town, then may Ba'al help you.

So, then what? An international consortium of independent bloggers? Nah. Sounds too much like just another blog badge. Cool little gif, but no real content there. Maybe a union? I like what I've been hearing about the Freelancer's Union--not much more than a way to transfer benefits around at this point, but it's a good start, especially for this day and age when more than 17 million workers are being forced into freelance and part-time work.

But what unites blogging specifically? Actually, a lot of things. Hyperlinks, for one. Very few blogs owe their readership to their original content alone. It's a network of links, blog rolls, reposts, and comments. As you have noticed, if you read or write any number of blogs, there tend to be certain circles developing (some heady philosophy types might call them strata, but we'll leave that alone for the present). Bruce Sterling picks up something, Warren Ellis reposts it conjoined with a naughty picture, BLDGBLOG posts on it perhaps independently with more commentary, it makes the RT Twitter rounds, Sit Down Man, You're a Bloody Tragedy provides Marxist interpretation, and eventually BoingBoing throws it out to the masses, after which we find out it was originally from Coilhouse or somewhere. And some eager reader of all of these consumes it multiple times, enjoying them all, and feels like he's part of some ethereal community. In addition, some of these people behind the blogs actually know each other in the real world! Almost like we all work together, separately, but together. Right guys? Right?

But Unionized--what exactly would that mean? Sure, there are some professional relationships here, but are bloggers going to take to the picket lines? Perhaps surprisingly, yes. It's kind of amazing to me, after a short lifetime of witnessing mass indecision and stagnancy in the real world, how quickly people will jump on a righteous cause via the Internet. Calls for support of open-Internet initiatives, rejection and boycott of censorship or DMCA malfeasance, or general attention to the plight of well-meaning, legally shaky artists are remarkably well spread and widely backed. Of course, not every issue succeeds (because it is the real world), but many do. It's an incredibly anti-authoritarian, libertarian Internet by the looks of it, which I couldn't be more happy about.

There are organizations out there like the Electronic Frontier Foundation, which act sort of like an Internet ACLU, knowing all the facts and case law, and advising individuals who are feeling the electronic boot upon their throats. But this is a defensive posture, albeit a worthy and entirely necessary one. Other similar opt-in digital entities like Creative Commons also help bloggers certify their material and protect it with the sort of adaptable, flexible control that a medium like the Internet requires.

These are all good things, on the technical and digital/material end. But what about the other material side of things? What is the going rate for blogger advertising? Should the average blogger have better options than the rate AdSense provides? Have you seen what sort of a chunk PayPal takes out of credit card transactions? Why are most bloggers treated like consumers when it comes to the financial side of blogging? Are there other options than these?

I think we could get something going fairly easily. Start with an International--a general resource about publishing to the Internet for pay. Then, we set up Locals: a new, more useful version of the Web Ring. Upon application and admission, with qualifications along the lines of a similarity in content, subject, revenue model, or whatever else seems like a good strategy for developing a bloc of content providers, we could make collective bargaining a possibility. This could set up a distributed method for cleaning out unsavory advertising partners, setting standards for pay and distribution of ads, and also providing an in-route for those getting into the field, and support for those already there.

The great part about it is that it would work well for both the large and the small sites. Even if 75% of the traffic for a Local is through 5% of the sites, there is no loss for larger sites taking on smaller sites, because traffic does not come into the Union unilaterally, but from the wide spread of the Internet. The sea of potential access is limitless--and this access is the commodity of the Internet.

However, here's the rub. This is a Union of workers--that is, not a chamber of commerce. The writers are the ones who are members, not their blogs. The difficulty is that many writers are now not only the ones in control of their blogs, but are, in effect, synonymous with them. How does "Joe's Blog" compare with any one of the zillion corporate blogs, which any number of people may be responsible for writing and publishing? Or with any other independent writer/publisher of music, photos, video, or anything else?

This is the brilliance of it. Everybody already knows who they are similar to, who does similar work, and produces similar products in similar payment schemes. You are already in a network; it is simply loose, shifting, and not-necessarily-organized. These people are your locality (if anyone calls it a hyper-locality, I'll spit at you). You get on board with them and set the standard for what you do, based on what you do. If you write travel blogs for Travel-Borg.com or whatever, get together with the food bloggers for Food-Puzzz.org. If you release exclusive mix tapes of hot club tracks, get together with the 78rpm audio archivers. If you post hilarious musings about the state of culture and listen to the echoes of your own voice bounce off those cold, cold concrete walls, well, you have your own problems--but probably someone else does too. Get together, and set up a standard for ad placement that doesn't squeeze the margins of your site like a torture device.

The true benefit of collective bargaining, and why it should be nothing less than a fundamental right of all workers, is that a worker does not have to struggle to fit themselves to a category in order to gain any sort of power or protection in their labor. The workers who work together can organize themselves, and decide what they need. Everybody works with somebody, and all of us work together. You utilize the power of the whole to set up the right conditions for the smaller groups and the individuals, with the base of the pillar always built first. Blogs have been moving along this route already. All blogs are part of the blogosphere, but individuals have made them what they are, in conjunction and with the support of their separate, interlocking networks.

One of the amazing features of the Internet, in my opinion, is how it organizes itself. The only thing is, "it" does not exist. It is actually a lot of people with different lives, interests, and backgrounds, who somehow have been able to organize themselves without any centralization. It's easy, because the Internet is so easily re-writeable and malleable. (Infrastructure aside, but that is a topic for a separate-but-not-quite-unrelated set of ideas...)

So I have a good feeling about the future of blog "worker protections". It won't happen by itself--but certain things have a way of happening, even despite the vertical power interests who might seek to prevent it.

So on the way there, just keep fighting that good fight, Internet folks.

4/07/2009

The Confused, Ice-Cream Stained Dogs of the Internet

I'm about to delve into a very confusing theoretical world. But don't worry—no matter what happens, we always end up popping out into reality on the other side.

In this world, people with confusing names do confusing things to confusing objects. Writers write about writing, and theorists theorize about theory. Then theorists theorize about writing, thereby writing it down. After which, writers write about theorists, and theorists write about theorists (they tend to talk about themselves). The writers write more about writing, which is then theorized, and possibly serialized by the theorists.

At the first call of serializing, the publishers show up. They can charge money for that! The publishers publish writing, not really caring what it's about. This shocks both the writers and the theorists, who write and theorize about the publishers and publishing, and eventually come to a conclusion: publishing is not writing. They proceed to write that down, and then the publishers publish it.

It's true: the publishers don't write much, or theorize either. It leaves them in a poor position to respond to the writers and theorists. Luckily, there are plenty of theorists and writers who write and theorize about theory and writing. The publishers publish this. Now the writers have theories about who is more responsible for publishing the publishing angle, and the theorists write many angry letters. The publishers publish these too. Eventually, everyone decides to scrap the publishers, and just write and theorize. Everyone agrees except the publishers (who don't write a damn thing about it), and they all go start a blog.

But then, they realize all their blogs are belongs to Google. Then shit really flies off the handle.

“Are belongs to”, in case you are not aware, is actually a complete possessive verb, not a grammatically incorrect infinitive. It is part of this very confusing world. It denotes ownzorship in the dimension of the Internet. Anything, singular or plural, can “are belongs to”, and most often it is “all” of whatever the subject noun happens to be (the etymology is vaguely bastardized Japanese, and Cuteish-English, see also “teh internets”). This is merely the way of the Internet, a dimension in which things are large in scope, and what is not included often doesn't exist.

None of these writer-theorist-publisher people actually exist either, at least not in the defined terms of these jobs as separate entities. I have just wasted your time by letting you try to follow my confusing little maze. I'm sorry. But many people are also wasting your time by having you think about things in confusing ways, and none of them have the shame to apologize for it.

But what I actually want to talk about is Hannah Arendt.

Hannah Arendt does not are belongs to teh internets, though it is hard to understand why not. Arendt is a profoundly American philosopher in my opinion, despite her European lineage in both philosophy and nationality. Her philosophy is fairly easy to understand, and delves into topics long important to Americans and their other associated theorists—this being the republic that it is (all your greco-roman philosophers are belongs to e pluribus unum). And since many people would like to make the Internet a mirror image of our great (not to mention, well-designed and always functional) nation, one would figure... but, well, no.

But what I actually want to write about is the Internet. Since this requires a bit of theorization, I'm going to have to theorize about theorists to get to the theory. And this is what I'm going to write about. Starting now. No more philosophy/Internet jokes.

Henry Porter hates Google, and writes about it the way a dog would write about being petted by a four-year-old. The dog understands that petting is often very good. It understands that hitting is very bad. But it does not understand why these small persons smelling of dirt and mashed foodstuffs run toward it with open arms, as if to pet, and then proceed to whack the dog about the ears and eyes with wide-open palms. Why? Why are they doing this? What should I do? Should I bite? Should I run? Fuck. I think I'll just lay down and accept it, and when all the people are gone, I'll pee on the rug.

“Google presents a far greater threat to the livelihood of individuals and the future of commercial institutions important to the community. One case emerged last week when a letter from Billy Bragg, Robin Gibb and other songwriters was published in the Times explaining that Google was playing very rough with those who appeared on its subsidiary, YouTube. When the Performing Rights Society demanded more money for music videos streamed from the website, Google reacted by refusing to pay the requested 0.22p per play and took down the videos of the artists concerned.”

Oww, ow-ow! You're supposed to be petting me!

Maybe my analogy was a little disingenuous. It's not very good form to compare editorial writers to animals, at least in the context of intelligent debate. Actually, he sounds more like a lobbyist for a bank or a large automaker, outraged at the idea that the federal government might force them to run an unprofitable business unsubsidized. Look—no one wants our product. Indeed, we admit that it is not worth what we say it is worth. But our livelihood depends on us getting money for it! You have to pay us! The future of this failed institution depends on it!

I would never let a dog starve.

Oh, but I'm rapidly proceeding in the wrong direction. I wanted to write about the Internet.

You might have thought all the dogs would be excited that there are people willing to pet them at all—and you would be right. The Internet is a remarkable resource because ________ (insert O'Reilly article, WSJ article, Google Army Oath of Allegiance here), and I think everyone is glad it exists. Kind of like a, say...

Democracy. A democracy of dogs. No, just kidding, a Democracy.

Democracy lets everybody have equal access to resources (supposedly), lets everyone have equal representation in distributing these resources (purportedly), and lets anyone say whatever they want about the state of the resources, their distribution, and editorial writers (woof). Democracy gives us something called the "public sphere", a marketplace of ideas and personalities, in which we can mostly peaceably shape our world. A space of individual action, in which we can fully realize our potential as humans. This is the ideal, at any rate, and it is a marvelously high-minded ideal. But don't listen to the dog, here's Hannah Arendt:

“...the heart of the polis, in the sense of a “space of appearance” or a “public space,” sees an action (praxis) that is not a fabrication but the “greatest achievement of which human beings are capable.” […] The polis is not an actual location, as was the Roman city-state, with its legal underpinnings, but an “organization of the people as it arises out of acting and speaking together” that can emerge “almost any time and anywhere” as long as “I appear to others as others appear to me.” The polis is thus the locus of the in-between, a political model that is founded on nothing less than “action and speech,” though never one without the other.” (Julia Kristeva's Hannah Arendt, 71)

This is actually Kristeva speaking about Arendt's The Human Condition. Sorry, I own a copy of the former, not the latter. But I believe she is quite on point in her descriptions, so it will do. Theorists about theorists, etc. Blame it on distribution problems.

Based upon these ideals and the function of the Internet, one would think we would be heralding Google into our lives: a wonderful super-polis of action and speech. Google is not merely a sales-site, or a marketplace. It is a conjunction of services that help one articulate oneself to others. This is more than your blog or your email. This is your phone, your location, your data and documents, and even your health records. This is not a crappy Second Life avatar! This is your real life, available for you and your Contact list, in real, extra-legal, democratic polis.

What do you think, Henry Porter?

“Despite the aura of heroic young enterprise that still miraculously attaches to the web, what we are seeing is a much older and toxic capitalist model - the classic monopoly that destroys industries and individual enterprise in its bid for ever greater profits. Despite its diversification, Google is in the final analysis a parasite that creates nothing, merely offering little aggregation, lists and the ordering of information generated by people who have invested their capital, skill and time.”

Whoa! If I wanted to ask some radical Marxist, I would have asked myself.

Porter is absolutely wrong. A parasite is an organism living off of a single host creature, sustaining its life at the detriment of the host. Google is not a thing, but a network, and if it wasn't for its users, it wouldn't exist. One might as well call god a parasite, for forcing humans to build churches and nail each other to stuff. Send your complaints to Google Groups, man, because the big G's projected upward in your image. (Goodness, am I saying that god doesn't exist, or saying Google is god? Which is worse?)

And a “little aggregation” is a pretty idiotic thing to say. Sure, the most widely used, best-functioning search engine in the world is merely categorizing some apples in order of size. Mr. Porter must be a strictly Yahoo-man, I guess.

He is wrong yet again when he says this is an older capitalist model (see the "other" Marxist, above). Clearly, a company that can earn $5.7B (his figure) in a quarter by doing nothing more than offering a "little aggregation" has stumbled on a goddamn machine that makes money, like, as if it knew how to put ink on paper that looked like money, or something.

I've said it before, so here's another time: the business is access. By showing every person entering the Internet agora an ad, you are going to be pulling it in, hand over fist. No, you are not producing anything; you are cornering distribution. A capitalist activity for sure, but certainly in a new mode, because the digital product being distributed, according to our little theory of democracy, is not just crappy pop songs, but people's lives.

This moralist—because that is what he is, a propagator of one strange fold within the shifting sands of value—cannot even identify what is going on here. He only seems to know he doesn't like it, and writes up some writing tropes about publishing that sort of make it seem like he has a reason for this dislike of a particular company and its business, when in actuality he doesn't seem to understand the technology, the product, or its producer and user.

But what would Hannah Arendt say?

Few would ask. But check it: included in her notion of this public arena is also the private sphere. The private sphere is, well, private—concerning the less-ideal world of homo faber, the productions of the home and the body, and the private “plumbing” of the individual. There is great power working in this individual realm, often tyrannical power. The forces of desire act with unrestrained cruelty on occasion, as much (or in concert) with love as anything else. And the body functions, well—don't let us get started. But this is where these unfortunately unideal, yet necessary activities and their vulgar, undemocratic power and productions remain: the private. And by this the public sphere remains free and uninhibited. Ideally, it sounds okay.

“Once the needs of production surpass the limits of the family and encroach upon the city-state itself, the ensuing flow of household concerns (oikia) into the public realm erases the boundary between the private and the public, which puts various sorts of freedom in jeopardy. At that point, public activities are conceptualized according to the model of familial relationships: the “economy” (from oikia: “household”) is what eats away at the polis and transforms it into a “society”. […] “Society equalizes (and) normalizes” to the point where “the most social form of government” is “bureaucracy”.” (Kristeva, 159-60)

So, if the personal, private, animalistic “needs” of humanity are appointed to the control of the public arena, the ideal polis is reduced to an economy, a control of resources, and often in a way that is nothing like “free”, despite the promises of society. And how could it remain free? Many have written about how brutal we are in pursuit of our material needs, despite our hopes to the contrary. So if fairness, justice, and democracy are put in charge of their procurement, is it so surprising to see the polis lean towards oppression and prejudice? If one has read Arendt's Eichmann in Jerusalem, one can see where she is going by mentioning “bureaucracy.”

So which is the Internet? Is Google keeping the polis pure by not charging money for YouTube views, or is it already too late, because it is attempting to create a society in what should be a pure medium, by letting us conduct our private needs in person? And what would Arendt have us do?

Porter, by way of contrast, is one of those most enlightened neo-liberals who says the corruption of bureaucracy and distribution can easily be overcome, as long as everyone listens to what he thinks is fair. Newspapers, according to him, are bastions of democracy, while Google is “anti-civic”. He can equally chastise Google for censoring the Internet via its services in China, and defend those who feel violated when their homes are pictured on Google Street View. The high point of liberal capitalist values is here—everyone should be able to hear my words, but nobody should be allowed to see my property. Hedges and billboards go hand in hand, it seems. Google sure “owes” him a lot; after all, he's a conscientious taxpayer!

With Arendt, it's not as clear, thankfully. She never wrote or theorized about the Internet, so it is difficult to say where she would have lined up digital productions—is this the free speech of the polis, or the private machinations of the home?

Well, I wonder, what's the difference? These days, our speech is about our homes, we work out of the home, and we speak about our work. Our homes are no longer private places, now grouped in stacked architectures, as is our work, and as are our lives. Our home life is often conducted outside the property proper: at the restaurant, the workplace, or any place with wifi. As is our work, which, more often now, is being conjoined and intertwined with our lives. Even the wage slaves among us are feeling this clock-punching as yet another rhythm of our internal clocks. Perhaps it's alienation, or perhaps we always were the aliens—dividing up the labor of our world since long before Aristotle's perfect state. Work, it seems, is more and more of what human beings call home. And we're more than willing to talk, email, video blog, or twitter about it.

This is simply going around in circles, back to our little game of writing, theorizing, and publishing. Is any of it really different here? We've made these ideals, these constructions, and we refuse to look through them to the actual machinery. We can build walls around our homes, with proprietary speaking tubes built in, demanding free concrete from the government with the best sound transmitting capacities available. Or, we can stop for a minute, and think about what the hell we're talking about.

Googles, gods, and gadflies, all.

Here's the thing: there only ever was teh private sphere, all are belongs to you. It only keeps increasing. We're running out of public space in which to build our private sphere.

The private sphere was that nice little cozy place inside your head where everything was yours. What needed to be distinguished from anything else? All was a lovely muddle of sensation, and no one could tell you otherwise.

But then, you found the others. You tried to ply them with your sensation, but the connection just wasn't there. You needed something more, a way to distinguish ignored desire from acknowledged reaction. And so, you found yourself, and the others, as separate beings. So many interesting private spheres out there, with their own furnishings and color schemes! Could we ever visit them all? Let's set up a public space—and call it “the world”. You can build your own avatar called the ego, and walk it about—talk, dance, and buy stuff, even though most of it is worthless.

The undifferentiatedness was still below, offline. No matter how public you became, that private part of you was still within. It would lurch skyward, blocking the sun from shining on the brilliant buildings you and your friends had constructed from those lovely cultural polygons. But what still lay in that darkness, and why did it persist, no matter how civilized you had made yourself to be? Chaos' horrible ocean was always waiting just beyond the ever-so democratic light of day. Under the person, the animal still remains.

And thus, we learned the power of “we”: the majority, the restraint of purpose against will, the state against the people. Justice, form, control: the super ego. Super users, admins, and designers. The sorts of people who make the decisions about which now we can make decisions. And what's more—a special little censoring admin for each of us, telling us when we can enable our anger function, and whether our custom skin is appropriate for any given interface. And these functions don't even touch what lies below the surface, below the face of our license agreements with ourselves—this opaque, liminal space where our super egos censor our comments and our desire-searches, and make undercover deals to sell our personal data to so-called “loved ones” (all for the goal of 'giving us a better consumer relationship', right?) and use us as test marketing and advertising guinea pigs to shape our likes and dislikes according to the availability out there in the polis, which is quickly becoming a marketplace—or maybe it always was. Some say the mind is a free market, but if there is one brain controlling all my decisions, how free can I really be?

Super ego and its metaphoric counterparts have run rampant through our society. From Porter, and the dogs of morality, to the well-meaning Arendt and her Platonic Ideals, we still attempt to control what we fear by covering it, dividing it from our “true” selves, from “real” society. Bureaucratic constraint over the unconscious manifests itself in every “I” statement; the small fascisms of “appropriate, state-sanctioned, volk-positive” desire still reign over our sense of outrage, offense, and indignation; the penny-pinching boss of our corporate ego still cracks the whip over the unruly factory of our emotions.

But this—what I'm doing here with this crafty unification between the necessary limits to our conscious self, and the world of information and “real-life” production—this publishing, theorizing, and writing is itself only a form of control. We can never get at that private sphere or set it truly free. We can only build endless marketplaces where we hope it will show up one day, for a chat and perhaps a little pleasant business. It is the unconscious after all, and to our egos—our false ideals of consciousness out here in the real world—it is nothing more than an overriding metaphor, hidden in the dark ink of words on a page or a screen.

If it is all a pleasant, metaphoric dream, then what is public and private; what is proper and what is piracy? We could loosen up our constrictions, and pause our accounting for just a moment. After all, when dealing with resources and digital production, in the end we really are dealing with people. People need the data, and people need the access. It is not an ultimate good, but it is a mostly-good. This is what we have to ask of Google, and anything that attempts to tell us how to act in public or private, in a marketplace or in a polis. What is in it for us: the egos, the unconsciousnesses, the producers, the consumers, the people themselves. Enough of your ideals, both puritan and democratic. This is the question of humanity's material existence, as it is currently best phrased. How do the means of digital production best benefit the production process, from creator to distributor to consumer? Any other theorization of the problem must fall upon some notion of an ideal, loosely defined, shifting sphere of action simultaneously including that which is not unified and excluding elements of the equation. Only consideration of the material as material, hence, as product in relationship with production, can attempt to consider it in its true state, not as owned, free, public, or private, but as information, completely material only in the dark processes of our imagination.

Yeah, it's a hard materialist-psychoanalytic line to follow. I will say, true breakthroughs very rarely come from the “common logic”, or “public view” (as if this was at all convincing). And anyway, you could spend your time writing what the super ego tells you to write, or you could theorize what your consciousness couldn't imagine. I'll tell you this much—regardless of which category resembles Blogger, one of them is a lot more fun.