I've been reading with interest about the push, from numerous quarters, to create conventions of microsyntax, microstructures, and other neologisms for symbolic text in short, small messaging, with Twitter as the exemplary service. I have written a bit previously on the semiotics of Twitter and its unique 140-character format, and on the way it shapes both common language and symbolic language within its use. (Here; and Here)
The goal, it seems, is to develop conventions that increase the use of symbolic language within common language, so that the firm 140-character limit can be put to better use. Here are some excerpts from the various interested parties out there:
"[O]ur goal is not to turn Twitter into a mere transport layer for machine-readable data, but instead to allow semi-structured data to be mixed fluidly with normal message content." (http://twitterdata.org/)
"Nanoformats try to extend twitter capabilities to give more utility to the tool. Nanoformats try to give more semantic information to the twitter post for better filtering." (http://microformats.org/wiki/microblogging-nanoformats)
"These conventions are intended to be both human- and machine-readable, and our goal here is to: 1. identify conventions in the wild, as users or applications begin to apply it.2. document the semantics of the microsyntax we find or that community members propose, and 3. work toward consensus when alternative and incompatible conventions have been introduced or proposed." (http://microsyntax.pbworks.com/)
Very interesting! Besides the cool buzzwords like "nanoformat" and "microsyntax", which are just itching to be propelled into circulation by the NYT's tech section (after which I will hear them again from all the publishing blogs), I am captivated by the goal of semanticization of content for people and machines--equally and fluidly. This is some cyborg shit, here.
The explosion of content in Twitter has created a need for programs and applications to help parse the data, to keep it usable. One can only follow so many people, and with the increase of users and the increase of posts we quickly reach a saturation point. As the Twitterverse of apps taking advantage of the simple Twitter API grows, this saturation is compounding upon itself, and Twitter is becoming less of a site, more of a service, and even, little by little, a format.
The Internet has given substance to all sorts of linguistic structures, from the densely complex (at least to the non-adept) programming languages of Flash, JavaScript, and the like, to the slightly more accessible "read-only" HTML, to the linguistically simple email, and even to the real-life-human-interface replicators of video/voice chat. However, each of these seems to find its place on one side of a categorical boundary, which I will call the signification language/programmatic language boundary. I'm about to launch into several of these categorical boundaries—somewhat dense distinctions of theoretical concepts, which often overlap as much as they differ. However, because semiotics, or the study of “meaning”, is about these very distinctions, I use them as diagrams or illustrations to try and get closer to a certain sense of meaning which I believe is relevant to the conversation.
Signification language is, simply, all common language and syntax as we know it, being that we are thinking, speaking, understanding humans. This is language built from signifiers, intending to reach the signified, or some ideal variation thereof. It is language which, as we know it, attempts to "mean" something.
Programmatic language, on the other hand, is still built from signifiers, but not intending to relate to the signified directly. Another way to put it is that programmatic language does not have pure content. Programmatic language is built from signifiers which are meant to interact, and thereby perform a linguistic function on content, but this content is held apart, like a variable, and therefore kept categorically separate from the rest of the signifiers with programmatic meaning. What I'm saying in a roundabout way is that this is a programming language. You cannot speak Flash. You can know Flash, and by compiling and understanding it via a "runtime", interact with content in various ways. The content is what is being spoken and understood, but being spoken and understood through Flash.
(This would be as good a time as any to remove any remaining doubt, and admit that I have only a basic understanding of simple programming. However, I believe I understand the concept enough to talk about it, at least from a semiotic standpoint.)
A good example of the programmatic is Pig Latin. It takes a language that does mean something, and converts it programmatically into a new form, which can easily be understood by anyone who can parse the program. Another example is the literary tool known as metaphor--anyone who can parse metaphor knows that it is not meant literally, and therefore he or she is able to easily search the surrounding content for the analogical terms of the program: A is to B as X is to Y. Logic is another sort of program; gold is yellow/all things yellow are not gold—this has meaning because of a way of understanding how it means, not only what it means. And so on. In fact, it might be said that the rules of grammar and syntax for our signification languages are themselves a programmatic component of signification, and this would not be totally incorrect. (And here is the overlap of the categories.) We are not reliant on grammar and syntax to signify, but for those attuned to the programmatic language, it transforms the content and allows it to have a new dimension of meaning: a new how it means. This new dimension, though not always being dogmatically utilitarian, is always related to use. Language is the use of language, whether in the act of signification, programmatic interaction, or wild, totally incomprehensible expression.
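To make the idea of a program operating over language concrete, here is a minimal sketch of a Pig Latin translator (my own illustration, not something from the sources quoted above), assuming one simplified rule: leading consonants move to the end of the word and "ay" is appended, with "way" for vowel-initial words. Anyone who can run the program can parse the result, which is exactly the point.

```python
import re

def pig_latin(word):
    """Convert a single word to Pig Latin using one simplified rule:
    move any leading consonants to the end and append 'ay'."""
    match = re.match(r"([^aeiouAEIOU]*)(\w+)", word)
    if not match or not match.group(2):
        return word
    onset, rest = match.groups()
    return rest + onset.lower() + "ay" if onset else word + "way"

def translate(sentence):
    """Apply the rule word by word; punctuation handling is deliberately naive."""
    return " ".join(pig_latin(w) for w in sentence.split())

print(translate("language is the use of language"))
# -> 'anguagelay isway ethay useway ofway anguagelay'
```

The program adds nothing to what the sentence means; it only changes how it means, and anyone holding the same rule can undo it.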
There is another concept I'd like to throw into the mix. This is the duality between free-play and universalization. It is, like the signification/programmatic duality, not exactly mutually exclusive. Free-play is mostly related to signification, because it occurs in the act of signification, along with intent. We gain new signifiers and meanings by a poetic play of the signifiers. Universalization works in the opposite direction. By forming a hard and reproducible definition of a concept, word, or action, we can ensure that meaning will not mutate, and anyone who avails themselves of this definitional quality can be reasonably sure the meaning can be established between various people, unified by the universalization of the concept. A certain amount of both of these occurs in all language, but signification can be almost entirely free-play (e.g. “You non-accudinous carpet tacks!”) and programmatics can be nearly pure universalization (e.g. “def:accudinous=0”). However, signification must also contain a great deal of universalization in order to mean anything more complex than a simple emotional outburst. And programmatics contains free-play as well (everyone knows programming is quite creative, despite the stereotype). It is the difference between these two ideas that gives them their power--not their exclusivity. To take the Pig Latin example again: one could easily write a program to translate a poem into Pig Latin. It's strictly universal, and accurate. But could one write a program to translate poetry, and maintain its poetic play? Much more difficult. But try employing a poet to translate things into Pig Latin. It might work, but you'd be better off with a program that can streamline the universalities.
So, the goal of microsyntax (I'm just going to choose one term and stick with it) is to create a certain amount of universalization of programmatics, such that the content of Tweets can function programmatically, improving the quality of the content within the form. However, there is also strict attention to keeping the programmatics within an overall format of free-play signification. This seeks to maintain wide use and ease of human understanding as well as computer parsing, and to preserve the free-play aspects that have made Twitter so popular.
The reason I have bored you with all of these mutated semiotic terms is so I can explain just how interesting this goal is. I can think of very few attempts to institute such a composite of signification and programmatic language in our linguistic world. There are plenty of overlaps in daily use of language between these concepts, though no defined interaction between them as a goal. There are some abstract examples where the goal is implied. World of Warcraft, or any other MMORPG, for example, combines a signifying social network with the programmatic skill set of playing an RPG. Of course, the programmatic aspects of the game, once mastered, take a formulaic back seat to the social, conversational aspect of guilds and clans. You can even outsource your gold mining to Asia, these days.
So Twitter is at least somewhat unique in that developers of microsyntax are taking into consideration the fact that the programmatic will be bonded and joined, fluidly, with the signification language of the medium. These are programmatic techniques developed for the user. Basically, we are asking IM users to learn rudimentary DB programming--and expecting them to do so because it is fun and useful. If you don't see this as a fairly new and quite interesting development, then you are probably reading the wrong essay.
So what is unique about Twitter that is causing this interesting semiotic effect? What is it about this basically conversationally-derived medium that is causing us to inject it with programmatics?
This is what Twitter does--it takes text messaging, a signification language, and adds some programmatic features. First: a timeline, always (or nearly so) available via API. Second: conjunction, i.e. the “following” function; one can conjoin various accessible timelines into one feed. Third: search; one can search these timelines, within or across following conjunctions.
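To illustrate just how small this programmatic layer is, here is a toy, in-memory model of those three features: per-user timelines, conjunction by following, and search. This is a sketch of the concepts only; it is not Twitter's actual implementation or API, and all the names in it are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    stamp: datetime = field(default_factory=datetime.now)

class ToyTimelineService:
    """A toy model of the three features named above: timelines,
    conjunction via 'following', and search across timelines."""

    def __init__(self):
        self.timelines = {}   # author -> list of Posts
        self.following = {}   # user -> set of followed authors

    def post(self, author, text):
        self.timelines.setdefault(author, []).append(Post(author, text))

    def follow(self, user, other):
        self.following.setdefault(user, set()).add(other)

    def feed(self, user):
        """Conjunction: merge the followed timelines (plus one's own) into a single feed."""
        names = {user} | self.following.get(user, set())
        merged = [p for name in names for p in self.timelines.get(name, [])]
        return sorted(merged, key=lambda p: p.stamp, reverse=True)

    def search(self, term):
        """Search all timelines, within or across following conjunctions."""
        return [p for posts in self.timelines.values() for p in posts
                if term.lower() in p.text.lower()]
```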
But these features are not within the signification matrix. The timestamp may be metadata, but the availability of timelines, follow lists, and the search are only available via the service framework and its API. Without the service's presence on the Internet, few people would be using Twitter, because even if you can follow and unfollow via SMS, how are you going to decide who to follow without search features and the ability to abstract the conjunctions by peering at other people's timelines? You might as well be texting your number to ads you read on billboards, trying to find an interesting source of information. With the web app, you can actually use the service as a service, and utilize its programmatics to customize your access to the content.
So how do the programmatic features begin to enter the content? I think it is because of the magic number 140. Because of this limit, the content is already undergoing some programmatic restraints on its ability to signify. As in an IM or an SMS, abbreviations and acronyms are used to conserve space, while still transmitting meaning. But this is a closed system; this bit of programmatics continues to refer to the content. The interesting thing about Twitter is that the formal elements of the program within the text can reach out of the content, to the program of the service itself, and then back into the content. In this way, it is completely crossing the barrier between form and content--not just questioning the barrier or breaking it, but crossing it at will. Because the content is restricted to a small quantity, around which the service's program forms messages, we are left with a thousand tight little packages, which we must carefully author. They are easy to make, send, and receive, but we have to be a bit clever to work within and around the 140.
This is a third semiotic category differentiation: the “interior” of content and the “exterior” of its programmatic network. As far as users are concerned, most Internet services are entirely interior. You create a homepage, or a profile, and via the links and connections this central node generates, you spread and travel throughout the network. You can view other profiles, but only via the context of having a similar profile. Such services are entirely interior because everything moves from the center outward into space, and there is no border between the service's content and its programmatic functions that lead between elements of content. The hyperlink is an extension of the interior, not a link to any exterior. The developer of Facebook or some other service may be able to magically “see” the exterior and manipulate it, as if s/he were viewing the “Matrix”, but the user can only see the content.
Twitter is different, because the service, for all intents and purposes, is not much more complicated than the programmatics the user already must utilize. The programmatic exterior is visible, because it is such an important element of what makes the interior content function for the user. The simplicity of the 140 limit makes the junction between interior and exterior very apparent; because there is so little space for content, the programmatics are relatively simple, and necessarily very available. And because of this, users are willing to creatively explore new programmatics, to venture into this “exterior” with their “interior” content, and to keep bridging the gap, because the functionality is already bridged so often in their understanding of the Twitter language.
Here are some programmatic symbols that have proved themselves useful. @ was the first find, I believe (shouldn't somebody be writing a history of this?), allowing conjunctions to grow across timelines. Then came # (the development of which is traced at http://microsyntax.pbworks.com), which similarly links posts into new timelines, not by user, but by subject. RT is a way of expanding and echoing content throughout new timelines, whether user-based or subject-based. And then the ability of the Twitter service to recognize URLs allows the content to connect back to the rest of the Internet (and accordingly, URL shorteners, picture or other media storage, and anything else the web can hold).
All of these have been user-developed, and picked up and utilized by the widespread user base, thus proving their own efficacy. Eventually, the Twitter service added its own features, recognizing these symbols as its own unique HTML Twitter tags and giving them native function in the form of links within the basic Twitter service, without requiring an app to do so. These symbols change and enhance the content of the Tweets, and allow the user to relate to and access content outside the Tweet itself, as well as interact between various Tweets in a universally understood way.
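As an illustration of how mechanically recognizable these conventions are, here is a rough sketch that pulls @-mentions, #-hashtags, RTs, and URLs out of a tweet and, like the web client, turns them into links. The regular expressions and link targets are deliberately simplified assumptions of mine; Twitter's real entity-extraction rules handle many more edge cases.

```python
import re

# Deliberately simplified patterns; the real rules handle punctuation,
# Unicode, shortened links, and many other edge cases.
MENTION = re.compile(r"@(\w+)")
HASHTAG = re.compile(r"#(\w+)")
URL     = re.compile(r"https?://\S+")
RETWEET = re.compile(r"^RT\s+@(\w+)", re.IGNORECASE)

def parse(tweet):
    """Read the programmatic layer out of an otherwise ordinary sentence."""
    rt = RETWEET.match(tweet)
    return {
        "mentions": MENTION.findall(tweet),
        "hashtags": HASHTAG.findall(tweet),
        "urls": URL.findall(tweet),
        "retweet_of": rt.group(1) if rt else None,
    }

def linkify(tweet):
    """Turn the same symbols into links, roughly as the web client does
    (the link targets here are placeholders, not Twitter's actual URLs)."""
    tweet = URL.sub(lambda m: '<a href="{0}">{0}</a>'.format(m.group(0)), tweet)
    tweet = MENTION.sub(r'<a href="/\1">@\1</a>', tweet)
    tweet = HASHTAG.sub(r'<a href="/search?q=%23\1">#\1</a>', tweet)
    return tweet

print(parse("RT @alice: reading about #microsyntax at http://microsyntax.pbworks.com"))
```

The point is only that a human reads the sentence straight through, while a few lines of pattern matching read the programmatic layer out of the very same characters.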
I know there are other symbols people use out there, but they are not as widespread as the ones I have just mentioned. This is an interesting facet of the programmatic Twitter symbol. Any symbol can intend any meaning, either through straight signification or programmatic use. But, to really enhance the Twitter medium, it must catch on. This allows it to function in the medium according to the programmatics of the form itself—the timeline, conjunctive networks, and search. If two friends have a secret code, that might provide a certain use between two people. But once that code becomes a language general enough for meaning to be intended for the broad base of users, and similarly, appropriated and used by them, then it is not a cypher, but part of a language itself. Its use will play until it develops enough of a universal character to be available to just about anyone.
We have seen the Twitter service look out for these things and exploit/develop them, as any good Web 2.0 company should. One might call them “official”, or as much as anything about such a free service is official. Certainly, when Twitter recently changed the service such that @ responses would not appear in the timelines of those not following the respondee, this was about as official a service change as one could imagine.
This introduces another question, similar to this issue of “official” symbol universalization. We might call these questions “social” questions, because to the extent that language only occurs between individuals gathered in a mass, having the unique combination of free-play and universal programmatic meaning associated with content, the dictates of individuals do affect the language's use. Naturally, one cannot make a language illegal, or regulate its usage, but the attempts to do so will have an effect of some sort, even if not the intended effect. Social control of a language may not firmly control its meaning (content) or use (form), but it will certainly change it, on both counts.
So the second social question I would like to raise is this: in addition to the effect of relying upon the Twitter service to adopt and officially universalize symbols' programmatic use, to what extent are we willing to base the designs of new symbols, and their open-sourced, community-driven conventions, upon a single service, a closed, controlled entity that happens to be a private company? I am not so interested in the intellectual property aspects (for the moment), but to develop a microsyntax for such a service is to develop a language that will be, in the end, limited and proprietary. What other services, forms, and media might the development of a microsyntax affect? To what extent should the microsyntax be limited to Twitter? To what extent will the usefulness of a microsyntax be affected by attempts to universalize or localize the language to a particular service? The size and popularity of Twitter seem to make these moot points, to some extent. Clearly a unique syntax is already developing, whether or not it is the best idea. But is “learning to speak Twitter” really the best idea? Or should the semiotic lessons we learn from exploring microsyntax be better applied to a wider range of media than simply a “glorified text message”?
I do not know the answers to any of these questions yet, nor do I really even have any idea of what sort of symbols should be included. Being the amateur semiotician I am, I have a different position to push.
The notion I would like to add to the discussion is a bit abstract, but I believe it is important. I would like to introject the concept of Authorship, for what good it may do (if any).
Authorship used to be the main source of innovative programmatics in language. Naturally, it was a main source of significatory content as well—but even more important than the stories themselves was the way they were told. From the time of Homer, the author has held a significant position in language as the programmer, the prime mover, and the service provider. It was with a certain authority, a certain speaking of the subjective “I” transformed into universalized narrative, that an author was able to shape the use of language. Before the days of authors, perhaps group-memorized verbal legends were the original crowd-source.
And we're heading back there again. I'm not going to dignify Twitter-novels with discussion, but I think it is clear that unencumbered access to literature via digital technology is becoming more important to its consumption than the identity of the author. I don't think you can crowd-source the writing of a book per se, but you sure can't get anyone to read a book without a little bit of user-generated marketing.
But even though authors may be a little disappointed at no longer being well-paid (or paid at all) celebrities, they still haven't lost their power over language. They have a puissance, in the “pushing” or “forcing” sense of the word, as well as the potential. Perhaps they have lost their way a bit, and forgotten the power one can wield with a bit of forceful word-smushing (certainly folks have died for it in the past), but the capacity is still there. Authorship is a firm hand around the pen, or fingers on the keys.
I don't think this bit of figurative nostalgia is unrelated. One doesn't set up a new syntax by writing a white paper or a blog essay—one does it by going out there and using the syntax. Proposing something is never enough; one has to use it with force, and let the force of the symbols become self-evident. If it is powerful, then it shall be. Language has developed, since the age of authors and perhaps even before, via loud shouts, firmly intended phrases, and eloquent incantations alike. We know there is hate speech, and are wary of its power. What about language with the power to build, or unite? Or simply to communicate with lightning speed—a linguistic Internet in symbols and syntax alone. I was fascinated by Dune as a kid—the idea that the Atreides had their own battle language, a secret language only used in matters of life and death, bowled me over. No, Twitter is not a battle language. But it is some sort of new language. Perhaps, an Internet Language.
But this is the problem with the Twitter service: it is ultimately reductive, in signification and programmatics. It's that damn 140-character limit—both the source of its semiotic innovation and all of its troubles. By being one of the first popular services to define both an inside and an outside to its content (the others, which think of themselves as ever-growing blobs, come to look like blobs too), it chose too small a box. We need more from our Internet content than 140 characters can ever provide. Therefore the wild expansion is occurring on the outside, and the Twitterverse is becoming a horribly mutated and desolate place.
The truth is, interactions with ulterior apps via programmatics and the API are near worthless. Sure, you can develop some good client apps for writing posts, keeping track of multiple timelines, and searching. But micropayments? GTD lists? Real threaded messages and chats? Media sharing? These are all hopelessly wishful thinking. Just because the service is popular does not mean you can convince everybody, or even a critical mass, to accomplish all their Internet uses through a 140-character window. All of these things exist in “large form” in their own separate interiors, and to try and shrink them into the syntax of Twitter is to squish them too much, and fill up that little 140-character box to the breaking point. This is not to say we have uncovered all that Twitter has to offer—but it is to say that most of the invention is horribly un-programmatically authored.
Twitter's power lies in its communication—in its content, rather than in shoving content programmatically through an overloaded API. It delivers small, concise messages, and allows a certain amount of programmatic networking to access this content, in a brilliantly small and simple package. As authors, this is the avenue we should develop for Twitter: to push Twitter and see what we can do with its programmatic content, not IPO-in-the-sky payday concepts.
But microsyntax need not begin and end with Twitter. What if we took the equally accessible interior/exterior, content/programmatics approach we have found in Twitter, and applied it to other services, or created new services around this semiotic utility? What if, rather than force all the exterior into a too-small 140-character interior, we developed an interior simple enough, say like plain text, and developed microsyntax to control the programmatic aspects of access to this plain text in ways simple enough for any user to wield? What if a service were created, not unlike email, that would route plain text on the basis of its plain text? What microsyntax could be added to email systems, for example? What about openly-readable, tagged, searchable email? Why not? Why are wikis constrained to web servers, like shadow-plays of web activity? Why aren't they linked via opt-in, streaming timeline conjunctions? Rather than storing an edit history, a wiki could be the timeline itself, constantly in atemporal motion rather than accumulating on a server. Anybody opting in would be simultaneously reading and forming the wiki with their programmatically-intended text updates.
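Purely to make this speculation tangible, here is a minimal sketch of what "routing plain text on the basis of its plain text" might look like: a hypothetical directive embedded in the message itself decides which opt-in channels receive it. The "&channel" symbol and every name below are my own inventions for illustration; nothing here describes an existing service.

```python
import re

# Hypothetical directive, invented for illustration only: "&channel" routes
# a message to everyone who has opted in to that channel; the rest is content.
ROUTE = re.compile(r"&(\w+)")

class PlainTextRouter:
    def __init__(self):
        self.subscribers = {}   # channel name -> set of handler callables

    def opt_in(self, channel, handler):
        self.subscribers.setdefault(channel, set()).add(handler)

    def send(self, text):
        """Route the message to every channel named in the text itself."""
        for channel in ROUTE.findall(text):
            for handler in self.subscribers.get(channel, ()):
                handler(text)

router = PlainTextRouter()
router.opt_in("wiki", lambda msg: print("wiki feed gets:", msg))
router.send("&wiki The entry on microsyntax needs a history section.")
```

The routing information never leaves the content; the "exterior" is read directly off the "interior", which is the property of Twitter's microsyntax this whole essay has been circling.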
Twitter has also opened the door. It has linked the programmatic with content in a way that appeals to millions of people, and could be argued to have provided real use to these same millions. Now that we, as authors wielding such methods, can see what it is doing to the usefulness of language, we have a new angle from which to push language. What other sorts of programmatic changes can we make to our content, both on Twitter, and in the rest of our linguistic world? Could we develop a microsyntax for everyday speech? Certain microsyntax elements leak into speech already. What about long-form Internet writing? What symbols would improve its function, and what HTML tags would provide better access both inside and outside the text? What should be universalized, and what should get more free-play? Should we develop a taxonomy of tags? A symbol to denote obscure metaphor? The possibilities, and the potential, are near endless.