Categories
main mozilla tech

Pontoon – introduction

One of the three core types of content I described in my previous blog post is what I call rich content.

This type of localizable content has characteristics very different from common UI entities: a different goal (to provide information), a different size (long sentences, paragraphs), different l10n flexibility (the ability to reorder, extend, or shrink long texts is needed) and a much richer syntax (sentences can be styled with HTML and CSS).

Almost a year ago I started playing around with the idea of a web tool that would allow WYSIWYG-style localization for the web. It was later picked up by Ozten in his blog post about potential use cases, and finally landed on Fred Wenzel’s mind as he decided to give this idea a try. He ignited the project and created the first iteration of the UI, where I joined him and added the server side.

Now, the idea itself is neither new nor shocking. Many projects have experimented with some form of live localization for web content, but the solutions we know so far are hardly adaptable for generic purposes. Some require a lot of prerequisites, others are gettext-only, and they come as black-box solutions that require you to follow all their procedures and lock your process in.

Pontoon is different in the sense that it is a toolset that allows rich content localization with a varying amount of prerequisites and varying strictness of output. It’s a very alpha-stage tool, so be kind, but I think it’s ready for the first round of feedback 🙂

Pontoon – components

Pontoon is composed of three elements:

  • client – the pontoon client is an application that allows for content navigation and localization. It’s responsible for analyzing the content and providing ways for the user to translate it.
  • server – the pontoon server is an application that the client communicates with to send/update translations provided by the localizer and store them for the website to use later.
  • hook – the pontoon hook is a small library that can be hooked into a target application to provide additional information that improves the client’s abilities.

There are various clients possible; for now we have two – an HTML website client and a Jetpack 0.8.1 based one. They share a lot of code and operate quite similarly.

We have one server – a Django-based one – that receives translations provided by the client and uses Silme to store them in a file (currently .po). And we have one hook – for PHP – that adds special meta headers and provides a simple API for modified gettext calls, giving the client direct information on where the entities are located, so that the client does not have to guess.

I’ll dig into details in a separate post – there’s much more to how Pontoon can operate and what the possible enhancements for each of the three components are – but for now I’d like to present a short video of Pontoon used to localize two old versions of our website projects.

The video is a coffee-long, so feel free to grab one and schedule 6 minutes 🙂

URLs used in the video:

Notes:

  • the locale codes supported by the server are usually of the form ab_CD, not ab – for example de_DE, fr_FR, etc.
  • pontoon does not support multiple users at the same time, so you may observe strange results when many people try it at once. Enjoy alpha mode!
  • “0.1 alpha” means – I’m looking for feedback, comments, ideas and contributions 🙂


My vision of the future of Mozilla localization environment (part1)

After two parts of my vision of local communities, I’d like to make a sudden shift and write a bit about the technical aspects of localization. The reason is trivial: the third, and last, part of the social story is the most complex one and requires a lot of thinking to get it right.

In the meantime, I work on several aspects of our l10n environment and I’d like to share with you some of the experiences and hopes around it.

Changes, changes

What I wrote in the social vision, part 1, about how the landscape of Mozilla is changing and getting more complex holds true from the localization perspective, and it requires us to adapt in a similar fashion as it requires local communities to.

There are three major shifts I observe that make our past approach insufficient.

  1. User Interfaces become more sophisticated than ever
  2. Product forms are becoming more diversified, and new forms of mashups appear that blend web data, UI and content
  3. Different products have different “refresh cycles”, in which different amounts of content/UI are replaced

Historically, we used DTD and properties files for most of our products. The biggest issue with DTD/properties is that those two formats were never meant to be used for localization. We adapted, exploited and extended them to match some of our needs, but their limitations are pretty obvious.
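For context, both formats are flat key/value stores, with no room for plural rules, context, or any logic – roughly like this:

```
<!-- dialog.dtd -->
<!ENTITY saveButton.label "Save">

# dialog.properties
downloadsCount=%S downloads
```

Everything beyond a plain string, like plural forms or gender agreement, has to be hacked in on top.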

In response to those changes, we spent a significant amount of time analyzing and rethinking l10n formats to address the needs of Mozilla today, and we came up with three distinct forms of data that require localization and three technologies that we want to use.

L20n

Our major products, like Firefox, Thunderbird, Seamonkey or Firefox Mobile, are becoming more sophisticated. We want to show as little UI as possible; each pixel is sacred. If we decide to take a piece of screen from the user, we want to use it to the maximum. Small buttons and toolbars should be denser – they should present and offer more power, be intuitive and let the user keep full control over the situation.

That exposes a major challenge to localization. Each message must be precise, clear and natural to users to minimize their confusion. Strings are becoming more complex, with more data elements influencing them. It’s becoming less common to have plain, static sentences. It’s becoming more common that a string will show in a tooltip, will have little screen space (Mobile) and will depend on the state of other elements (number of open tabs, time, gender of the user).

DTD/properties are absolutely not ready to meet those requirements, and the more hacks we implement, the harder it will be to maintain the product and its localizations. Unfortunately, the other technologies we considered, like gettext, XLIFF or Qt’s TS file format, share most of those limitations and have themselves been actively hacked around for years now (like gettext’s msgctxt).

Knowing that, we started thinking about what a localization format/technology would look like if we could start today. From scratch. Knowing what we know. With the experience that we have.

We knew that we would like to solve once and for all the problem with astonishing diversity of languages, linguistic rules, forms, variables. We knew we’d like to build a powerful tool set that would allow localizers to maintain their localizations easier, and localize with more context information (like, where the string will be used) than ever. We knew that we want to simplify the cooperation between developers and localizers. And we knew we would love to make it easy to use for everyone.

Axel Hecht came up with the concept of L20n. A format that shifts several paradigms of software localization by enabling algorithmic power outside of the source code. His motto is “Make easy things easy, and complex things possible”, and that’s exactly what L20n does.

It doesn’t make sense to try to summarize L20n here – I’ll dig deeper in a separate blog post in this series – but what’s important for the sake of this one is that L20n is meant to be a new beginning, different from previous generations of localization formats, defining differently the contract between localizer and developer called “an entity”.

It targets software UI elements, should work in any environment (yes, Python, PHP and Perl too) and allows for building natural sentences with the full power of each language without leaking this complexity to other locales or to developers themselves. I know, it sounds bold, but we’re talking about Pike’s idea, right?
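To make the idea concrete without pretending to show real L20n syntax, here is a toy Python sketch of the paradigm shift: the locale carries its own selection logic (a plural rule, in this case), so the source code only hands over raw data and never sees the linguistic complexity.

```python
# Toy illustration of the L20n paradigm (NOT actual L20n syntax):
# each locale ships its own logic for picking the right sentence,
# and the source code only provides raw data.

def polish_plural(n):
    """Polish has three plural forms, chosen by these rules."""
    if n == 1:
        return "one"
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "few"
    return "many"

LOCALE_PL = {
    "tabs-open": lambda args: {
        "one": "Otwarta jest 1 karta",
        "few": f"Otwarte są {args['n']} karty",
        "many": f"Otwartych jest {args['n']} kart",
    }[polish_plural(args["n"])],
}

def localize(entity, args):
    return LOCALE_PL[entity](args)

print(localize("tabs-open", {"n": 5}))  # Otwartych jest 5 kart
```

The English locale for the same entity would ship a trivial two-form rule, and neither locale’s complexity leaks into the other or into the application code.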

Common Pool

While our major products require more complexity, we’re also getting more new products in Mozilla, and very often they require little UI because they are meant to be non-interruptive. Their localization entities are plain and simple, short, and usually have a single definition and translation. The land of extensions is the most prominent example of this approach, but more and more of our products have such needs.

Think of an “OK” and a “Cancel” button. In 98% of cases, their translations are the same no matter where they are used. In 98% of cases, their translations are the same across all products and platforms. On top of that, there are three exceptions.

First, sometimes the platform uses a different translation of the word. MacOS may have a different translation of “Cancel” than Windows. It’s an easy, systematic difference shared among all products. It does not make sense to expose this complexity in each localization case and prepare each one separately for this exception.

Second, sometimes an application is specific enough to use a very particular translation of a given word. Maybe it is a medical application? A low-level development tool, or one for lawyers only? In that case, once again, the difference is easy to catch, and there’s a very clear layer at which we should make the switch. Exposing it lower in the stack, for each entity use, does not make sense.

Third, it is possible that one single use of an entity may require a different translation for a given language. That’s an extremely rare case, but a legitimate one. Once again, it doesn’t make sense to leak this complexity onto others.

The Common Pool addresses exactly this type of localization: simple, repetitive entities that are shared among many products. To address the exceptions, we’re adding a system of overlays which allows a localizer to specify a separate translation at one of the three levels above (possibly more).
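The overlay mechanism can be sketched as a simple layered lookup, assuming the three levels described above (all names here are mine, not the actual Common Pool implementation): the most specific translation wins, and everything else falls through to the shared pool.

```python
# Hedged sketch of the Common Pool overlay idea (illustrative names,
# not the real implementation). Lookup falls through from the most
# specific layer to the shared pool.

common_pool = {"cancel": "Anuluj"}
platform_overlay = {"macos": {"cancel": "Poniechaj"}}     # platform-wide exception
app_overlay = {"medical-app": {}}                          # app-specific exceptions
use_overlay = {("medical-app", "discard-dialog"): {}}      # single-use exceptions

def translate(entity, platform, app, use_site):
    """Return the most specific translation available for an entity."""
    for layer in (
        use_overlay.get((app, use_site), {}),
        app_overlay.get(app, {}),
        platform_overlay.get(platform, {}),
    ):
        if entity in layer:
            return layer[entity]
    return common_pool[entity]

print(translate("cancel", "windows", "medical-app", "discard-dialog"))  # Anuluj
print(translate("cancel", "macos", "medical-app", "discard-dialog"))    # Poniechaj
```

In the common case a localizer translates “Cancel” once; an overlay entry is only ever created for one of the three exceptions.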

L20n and the Common Pool complement each other, and we’d like to make sure that they can be used together depending on the potential complexity of the entity.

Rich Content Localization

The third type is very different from the two above. Mozilla today produces a lot of content that goes way beyond product UI, and localization formats are terrible at dealing with such rich content – sentences, paragraphs, pages of text mixed with headers and footers that fill all of our websites.

This content is also diversified. SUMO or MDC articles may be translated into a significantly different layout, and their source versions are often updated with minor changes that should not invalidate the whole translation. On the other hand, small event-oriented websites like Five Years of Firefox or Browser Choice have different update patterns than project pages like Test Pilot or Drumbeat.

In this case, building the social contract between developers and localizers by wrapping pieces of text into uniquely identifiable objects called entities, then signing them and matching translations to sources the way we do with product UI, doesn’t make sense. Localizers need great flexibility; some changes should propagate to localizations automatically, and only some should invalidate them.
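One way to think about “some changes should not invalidate translations” is to fingerprint source paragraphs and flag only the ones that actually changed. This is a sketch under my own assumptions, not a description of any existing tool:

```python
# Sketch (my own assumption, not an existing tool): fingerprint each
# source paragraph so that only the paragraphs that actually changed
# mark their translations as needing review.

import hashlib

def fingerprint(paragraph):
    # Normalize whitespace so trivial reflows don't invalidate anything.
    normalized = " ".join(paragraph.split())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def outdated(old_source, new_source):
    """Return indices of paragraphs whose translation needs review."""
    old = [fingerprint(p) for p in old_source]
    new = [fingerprint(p) for p in new_source]
    return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]

old = ["Firefox is a browser.", "It is   open source."]
new = ["Firefox is a fast browser.", "It is open source."]
print(outdated(old, new))  # [0] - only the first paragraph really changed
```

A real rich-content tool would need smarter matching (paragraphs move, split and merge), but the principle is the same: track translations at a granularity finer than the whole document.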

For this last case we need very different tools, specific to document/web content localization, and if you have ever tried Verbatim or direct source HTML localization, you probably noticed how far that is from an optimal solution.

Is that all?

No, I don’t think so. But those are the three we identified, and we believe we have ideas on how to address them using modern technologies. If you see flaws in this logic, make sure to share your thoughts.

Why am I writing about this?

Well, I’m lucky enough to be part of the L10n-Drivers team at Mozilla, and I happen to be involved, in different ways, in experiments and projects that aim to address each of those three concepts. It’s exciting to be in a position that allows me to work on that, but I know that we, the l10n-drivers, will not be able to make it on our own.

We will need help from the whole Mozilla project. We will need support from people who produce content, people who create interfaces and, of course, from those who localize – from all of you.

This will be a long process, but it gives us a chance to bring localization to the next level and, for the first time ever, make computer user interfaces look natural.

In each of the following blog posts I’ll focus on one of the above types of localization and present the projects that aim at this goal.


My vision of the future of Mozilla local communities (part 2)

In my previous blog post, I summarized the transition the Mozilla project went through and how it applies to local Mozilla communities. I explicitly mentioned the enormous growth of the Mozilla ecosystem, the diversification of products & projects, and the differentiation of project development patterns, which results in different requirements for marketing, QA, support, localization, etc.

Now, I’d like to expand on how I believe our local communities now operate.

On Local Community Workload

Image source: http://www.edc.ncl.ac.uk/highlight/rhnovember2006g01.php/

The result of this growth of the Mozilla ecosystem is a rise in the workload that our local communities experience. With this work comes the challenge of communicating the richness of Mozilla to each locale on local ground. Localizer workload is mounting, and local communities are trying to find ways to adapt, because:

First, it is not scalable to manage all Mozilla localizations with a team of the size that fit the needs five years ago.

Second, localizers are not the only kind of people in a local community. There are various tasks requiring different skills, and different people may find different sorts of motivation to work on different aspects of Mozilla.

It’s pretty easy to get out of balance and take on more than you can handle when there’s so much going on and you feel in charge of your locale. Some communities are more successful at finding their way; some are struggling.

I believe we have to adjust our approach to this new reality.

My opinion on the role of l10n-drivers

Traditionally, a lot of the local engagement work has fallen on the plate of localizers. The l10n-drivers team then becomes very important in helping local communities manage their workload. Having participated as an l10n-driver for over a year now, I see how the team became crucial in supporting communities in several ways. It:

  • Makes sure that when we call for localizations, what you localize will be used for a long time, to maximize the work/value balance.
  • Builds tools that reduce the entry barrier and the time localizers spend on localization and the local management tasks around it.
  • Provides information on projects, their roadmaps, goals and results (metrics) to help localizers make informed decisions on what to localize and when.
  • Supports localizers in solving localization blockers like hardcoded or untranslatable strings, so that the results of their work are worth the time they spent, and so that, if they want, they can fully localize the product and make it look awesome and natural in their language. (Read: one untranslatable string ruins hard work and is a great way to demotivate anyone.)
  • Helps adjust project roadmaps to minimize the overlap between releases and spread the workload in time.

But the role of local communities has expanded far beyond localization. Our team’s work will not be enough, and I think we have to revise the assumptions we all make about what the localization process is and what our goals are.

My opinion of the changing role of Localizers

Localization of Mozilla today is not a single, homogeneous task like it used to be. There are different tasks to take on and different people who want to contribute. Some tasks require short spikes of attention once a year (around a release), others require bi-weekly contributions, and others have no release schedule and take any contribution at any time. They all require different amounts of energy, focus, attention and time.

And the core goal of localization – bringing the product closer to your local ground – is suddenly becoming a complex toolset. With so many projects to choose from, local communities should stop thinking about them as a single bundle. Instead, we should all recognize that this diversification allows us to pick what we need. You, the local community members, are best positioned to decide which projects are needed in your region. We cannot assume that each region needs the same mix of Mozilla ingredients.

By that I mean not only the ability to pick the projects to localize for your region, but also deciding, together with the project leaders, how much of a project should be translated and what kind of adjustments are required for your culture. It’s extremely important to understand that sometimes you cannot localize everything, although we all know how much satisfaction comes from “collecting it all”. Sometimes a “top 10” of articles gives better results than trying to figure out how to translate it all. And sometimes you need to go beyond translation. The “top 10” SUMO articles in English may be different from the “top 10” in your locale, and maybe some aspects of a marketing campaign would resonate better in your country if you adjusted them to your culture and reality.

Armed with this power, local communities can pick the projects that best match what is needed to promote the Mozilla vision in their region and put more effort into those. It’s a great power, and a great responsibility, and we have to trust that local communities know better than any centralized decision-making system ever can what is important. And we, the owners and peers of the projects, have to help local communities make the right choices and fine-tune the ingredients they picked. You, the local communities, are in charge here.

Local community

With so many tasks represented in Mozilla – evangelism, marketing, PR, QA, development, support, localization – it may be very challenging for a localization team to fulfill them all. Many local communities work on various aspects of the Mozilla project, and what’s common to them is their regional identity and proximity, which allows them to support one another, share resources and find new contributors. I believe it’s crucial to preserve the local identity, and that there is great value for each contributor around the world in peering with other contributors working on other aspects of Mozilla in their region – localization is not the only task out there.

And more than ever, we need local communities to cooperate with Mozilla project leaders to find new contributors and grow the communities. Generating new projects that attract new contributors is one of the key aspects of a healthy, sustainable ecosystem, and that’s true both for the Mozilla project as a whole and for Mozilla local communities.

In the last part, I’ll try to summarize the state change and give you some ideas to consider.


My vision of the future of Mozilla local communities (part 1)

I know, bold title.

Since I decided to start a blogging week, I see no reason not to start with a major topic I have been working on for a few months now. The future of local communities in Mozilla is made of two parts – social and technical.

I’ll start with the former, and it’s going to be a long one – you know me.

Notice: This is the way *I* see things.  It is not representative of the l10n-drivers, the SUMO team, the QA team, or the marketing team.

But it represents the progress in thinking about local communities that we’re making right now. It is different from what you saw some time ago, and it may change in the future; it does not represent any kind of consensus, and my peers may disagree with me on some points.

A little bit of history

Historically, and by that I mean the years 2000-2004, when the first strong local communities were constituted, everything was centered around localization. The localization ecosystem had several characteristics:

  • a finite number of projects
  • the core of any local community were its localizers
  • each product had a limited number of strings
  • each product had a release cycle no shorter than one year
  • awareness of the importance of localization among Mozillians was limited

Another specific thing about that time was that Mozilla as a community/project started growing faster than Mozilla as an organization. By this, I mean that people started participating in Mozilla all over the world, sometimes faster than the organization could predict, know about, understand and harness. It was very independent. What happened in Poland was very different from what happened in Italy, or the U.S., or anywhere else. In the days when Mozilla was formally organizing itself, few people at the “central project” could predict what was happening across the world. At times it was very frustrating to them… things were happening so fast, beyond the organization’s control.

As a motivated community, the Internet allowed us all to download the early Mozilla products and gave us something to gather around. We did, and it was amazing. People started fan sites, discussion forums and “news-zines”. The most determined ones started seeking ways to bring Mozilla to their country. The most natural way to participate was to localize the product, and by localize I mean various actions that make the product more suited to the local market – translating, changing defaults, and adding new features or modifying existing ones.

All this work was usually targeted in two directions – toward local markets, where those early community leaders were building local branches of Mozilla, and toward the Mozilla project, to fit the concept of local communities and the fundamental goal of internationalization into the core of our project culture.

Thanks to that work, today we can say that Mozilla is a global project, and we recognize localizability as one of the defining aspects of the Mozilla approach to projects.

But since those days, many things have changed. What was good back then may not be enough today.

Growth and Variety

Fast-forward to today: Mozilla as a meta-project is producing a much richer set of projects/products/technologies than we ever did.

We create many websites of various sizes. We have blends of websites and extensions (like TestPilot). We have webtools like Bugzilla and addons.mozilla.org. We have products like Firefox, Thunderbird, and Seamonkey. We have a mobile product with tighter screen space limits. We have experiments like Ubiquity or Raindrop that introduce a new level of complexity for localization. We have more content than ever.

The point is this: local communities represent Mozilla through a diverse set of mature products, early prototypes, innovative experiments, one-time marketing initiatives, and documents like our Manifesto that will live forever. This means the workflow has changed dramatically since the early days. Different projects with different or changing frequencies are becoming the standard for communities to absorb, in a newly differentiated and highly competitive marketplace. And our communities need to evolve to respond to this.

Each product has different characteristics, and local delivery through l10n and marketing means a very different type of commitment. It now requires different amounts of time and energy, different types of motivation, and different resources.

Additionally, we’re also more diversified in the quest to fulfill our mission. We have regions where modern web browsers constitute the vast majority of the market share, where governments, users and media understand the importance of browser choice or privacy, and where the Internet is a place where innovation happens. But we also have places where this is not the case. Where incumbent browsers still hold the majority, where the web will not move forward in the same way it did elsewhere, where the latest technologies cannot be used, and where privacy and openness sound artificial.

Recognizing this shift is an important factor in allowing us to adjust to the new reality, in which local communities have to expand beyond just localization. They must become local Mozilla representatives experienced in evangelism, marketing, localization, software development, and all other aspects of Mozilla. We need to get more local and grow beyond the responsibilities our local communities had in the past.

In the next part, I’ll cover some ideas for the future…


In MtV – blogging week

I delayed it way too long and now feel that I need to catch up with a lot of stuff.

So, since I just got to MtV, where I’ll spend some time now, I decided to organize a personal blogging week where each day I’ll blog about a piece of what I’m working on, to hopefully catch up with the projects I failed to blog about lately 🙂

On the plate we have: jetpack stuff, various dimensions of l20n, pontoon, a survey project, and, for the weekend, some non-mozilla projects as well 🙂

If you’re in the Bay Area and want to share a drink, a coffee, or socialize in any other way, let me know. And if you’re at 300 Castro, I claimed ownership of a desk next to Seth and Asa. It’s a bit busy here, but I like networking :]

Categories
main tech

Reading list for fellow Warsaw TEDxers

Lori and Noam asked me to share some books that could get them deeper into the rabbit hole. Here we go:

There is also the Mozilla Library with a lot of slide decks on the Mozilla project.

And two more decks on Mozilla:

Hope that’s a good start 🙂


TEDxWarsaw slides

Since quite a number of people asked for it, here are my slides:

Creative Commons License
Government hackability by Zbigniew Braniecki is licensed under a Creative Commons Attribution 3.0 Poland License.

Oh, and by the way, if you like my slides from TEDx, I think you will like my slides from eLiberatica 09. And if you learn something about Mozilla while reading them, I win. 🙂

Categories
main po polsku tech

slowni.pl

Following up on the two previous posts, here comes a third.

Słowni.pl is a project built on top of the OpenPolitics applications that collects candidates’ declarations made during an election campaign and then lets you analyze them and hold the candidates accountable once a given person wins the election.

The application is still fresh, but I’m using it to test account creation, reputation building and moderation.

I’d like to polish it over the next few months and launch it once the National Electoral Commission (Państwowa Komisja Wyborcza) publishes the list of candidates.

An important element of the system will be filtering out unverifiable declarations and focusing on declaration credibility and NPOV. For now I’m thinking of requiring each declaration to come with two sources and a description of a verifiability test that is bounded in time. Thanks to that, the site won’t collect declarations like “Things will get better”, only ones that can be verified after the election.

I don’t know yet whether reputation is a good idea, but it costs nothing, so I added it. Each user starts with a given reputation and, depending on their actions (for now they can only add a declaration or update one, but in the future they will be able to, say, report incorrect data), it can grow.

For now I also use the registration module, but I’m considering switching to OpenID as the only way to create an account, to avoid collecting personal data and passwords.

Feel free to play and test. The test project looks like this.

If anyone wants to, they can also install it themselves. It installs very much like the rest of the applications in the openpolitics package. 🙂


openpolitics and gov20.pl

Following up on the thoughts from my previous post, today I’m launching a project codenamed OpenPolitics.

OpenPolitics is a set of applications written in Django that aggregate government data and expose it in a form accessible both to users and to computers.

The first part is less important, but the second is the foundation of the project. I’d like to enable people to write applications that use publicly available government data, which today is hard to reach and inaccessible to computer programs. Paradoxically, the inability to fetch this data stops many of my friends from writing applications that support building a civic society, and makes them go write applications for which data is available – say, yet another Twitter client, a game, or a graphics mashup.

If we like how many applications appear, and how quickly they activate people, in areas like Firefox extensions, iPhone or Android apps, or data-analysis systems for Facebook or Twitter, then we have to understand that the foundation of this ecosystem is access to data, and an API that allows operating on it.

My project aims to build an interface between the world rooted in the beautiful, 20th-century model of democracy and the cyber-society. I think of it as a kind of OCR that lets you use valuable data stored in antediluvian systems.

In short, the project is meant to enable writing all kinds of applications operating on data such as:

  • Who is the Prime Minister of Poland today?
  • What is the email address of the Marshal of the Sejm?
  • How many PiS senators are there?
  • When was the last session of the Sejm held?
  • What changed between two versions of a bill?
  • How did my chosen MP vote over the last six months?
  • Who chaired the last Sejm proceedings?
  • How many advisors does the President of Poland have, and what are their email addresses?

These are just a few questions whose answers can be found today, but if we wanted to write an application that uses this data, processes it, or presents it in a form of our choosing, we would have to… well… tediously regexp our way through government websites. OpenPolitics does exactly that for us and exposes a reasonably clean API for retrieving such data.
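The “tediously regexp through government pages” part is easy to illustrate. Here is a toy sketch (the HTML markup and the pattern are invented, not taken from any real government page) of what every application would have to do on its own without a proper API:

```python
# Toy illustration of scraping-by-regex (invented markup, not a real
# government page): without an API, every app has to do this itself,
# and the code breaks whenever the page layout changes.

import re

page = """
<div class="officials">
  <span class="title">Prezes Rady Ministrów</span>
  <span class="name">Donald Tusk</span>
</div>
"""

match = re.search(
    r'class="title">Prezes Rady Ministrów</span>\s*<span class="name">([^<]+)<',
    page,
)
prime_minister = match.group(1) if match else None
print(prime_minister)  # Donald Tusk
```

With an API like the one gov20.pl exposes, the same answer would be a single structured request instead of a fragile scraper per application.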

The whole thing is open: you can set up your own instances, improve it, help me develop it, and adapt it to your needs.

Along with the 0.1 release of OpenPolitics, I deployed an installation of this version at http://www.gov20.pl and http://api.gov20.pl for testing and playing with the new capabilities 🙂

I put answers to a few questions in the FAQ.

Anything else? Tomorrow, at TEDx Warsaw, I’ll have the pleasure of talking a bit more about the intersection of technology and politics. The title of my talk is “Government Hackability”; I start at 17:11, and I believe there will be a live stream 🙂

Happy hacking, and if you’d like to help, I’ve collected a list of JuniorJobs.


Opening up politics the DIY way

I’m at an age where I still have dreams. I dream big, and fast.

One of the areas where I have particularly many dreams is the badly neglected intersection of computing and politics. For nearly ten years I have participated in, or observed the growth of, a huge number of projects that set themselves fantastic goals.

Wikipedia with its mission of collecting the world’s knowledge, Ubuntu with its desire to create Linux for human beings, and finally the one closest to me, Mozilla with its ideals of an Open Internet. Everyone who works in these and hundreds of other projects has developed certain habits, a certain know-how. We build powerful layers of tools that let us pursue our goals in the specific conditions of the Internet.

On the other hand I observe, like probably everyone, politics – politics that drifts further and further away from the “reality” I live in. Politics speaking the language of my parents, solving problems I care less and less about, in ways that seem to me inefficient at best.

At the same time, a whole society is developing extremely fast, one that simply ignores the world our politics revolves in. The challenges, problems, obstacles and methods we take up to grow Wikipedia, to communicate effectively through Facebook, to co-author documents in Etherpad or to make amateur films form a world that grows and changes so quickly that I am only surprised how, from this perspective, the space of public debate about politics starts to look somehow unserious.

And now, in this very reality, as the space of the Internet expands and becomes an important part of people’s lives, politics begins its clumsy attempts to approach this creature, so strange to it. One politician will start a blog, write two trial posts on it and get discouraged; another will sign up for Twitter and describe his breakfast there. Numerous committees on the Internet and Its Manifestations are formed, and sometimes the state gets terrified of the anarchy reigning there and decides to protect its citizens with a law banning something, completely failing to understand that the Internet has learned to solve its own problems, and that young people, raised with WiFi waves flying around them since childhood, perceive such moves as if they were watching an elephant in a china shop: clumsy, silly, and too slow to react and adapt.

On the other side we have something that could be called the “Web Approach”. A world where the technological possibilities appearing every six months are so groundbreaking that all earlier solutions lose their meaning. A world where everything is doable and is only a matter of time and skill put into programming a given solution. Where projects form spontaneously to react to emerging challenges.

It’s a world where transparency is built into the DNA of the ecosystem, where change is the only measurable constant, where law is generated locally – per service – and adjusted in reaction to changes within days or weeks.

The dynamics of this world, its fundamental difference, requires a generational change. Politics is not ready for it, and keeps overlaying its familiar mental maps onto phenomena radically different from everything it knew before.

The result is what we can observe in the area of “consultations” with Internet users, and in applications written by the state. The meeting with the Prime Minister was a charming example of a conversation in two different languages; the Prime Minister’s Chancellery proposal to elect “representatives of the Internet users” is another; assorted statements by politicians in the style of “we released this data only on the Polish Internet” are real gems. Recently, at some convention, one of the ministers proudly presented the conclusion that the state must go electronic and use email… I probably don’t need to explain how absurd this is in light of the serious discussions, ongoing for quite a while now, about email losing its importance.

All of this is at best outdated, and at worst an exemplification of the technological and social chasm described above.

On the other hand, the state is, and will remain, important. And it can, or rather should, be a powerful tool in the hands of its citizens. Technology can give us a powerful weapon, making it possible to keep those in power accountable, and it can also energize these processes by building a platform for dialogue and cooperation in the place where citizens already are – on the Internet, using mechanisms they already know.

This is not a question of “if” but “when”. Companies like Dell or Google have no problem getting feedback from the whole world, and they don’t need to meet with a 20th-century archaism – “representatives of the Internet users” – to do it. In the coming years, the world of politics faces a painful and long lesson about the new reality its voters live in.

And in the meantime, we will keep solving our own problems ourselves… bottom-up, organically, evolutionarily… problems like… how to write a Firefox extension that tells me who the Prime Minister of Poland is today, or where to get a list of email addresses of PO MPs?

The next post will answer these and other questions 🙂