main mozilla tech

Pontoon – introduction

One of the three core types of content I described in my previous blog post is what I call rich content.

This type of localizable content has characteristics that are very different from common UI entities: a different goal (to provide information), a different size (long sentences, paragraphs), different l10n flexibility (the ability to reorder, extend, or shrink long texts), and a much richer syntax (sentences can be styled with HTML and CSS).

Almost a year ago I started playing around with the idea of a web tool that would allow WYSIWYG-style localization for the web. It was later picked up by Ozten in his blog post about potential use cases, and finally landed in Fred Wenzel’s mind when he decided to give the idea a try. He ignited the project and created the first iteration of the UI, at which point I joined him and added the server side.

Now, the idea itself is neither new nor shocking. Many projects have experimented with some form of live localization for web content, but the solutions we have seen so far are hardly adaptable for generic purposes. Some require a lot of prerequisites, others are gettext-only, and they come as black-box solutions that require you to follow all of their procedures and lock your process in.

Pontoon is different in the sense that it is a toolset that allows rich content localization with a graded amount of prerequisites and varying strictness of output. It’s a very alpha-stage tool, so be kind, but I think it’s ready for a first round of feedback 🙂

Pontoon – components

Pontoon is composed of three elements:

  • client – the Pontoon client is an application for navigating and localizing content. It’s responsible for analyzing the content and providing ways for the user to translate it.
  • server – the Pontoon server is an application that the client communicates with in order to send/update translations provided by the localizer, and to store them for the website to use later.
  • hook – the Pontoon hook is a small library that can be hooked into a target application to provide additional information that improves the client’s abilities.

Various clients are possible; for now we have two – an HTML website client and a Jetpack 0.8.1-based one. They share a lot of code and operate quite similarly.

We have one server – a Django-based one – that receives translations provided by the client and uses Silme to store them in a file (currently .po). And we have one hook – for PHP – that adds special meta headers and provides a simple API for modified gettext calls, giving the client direct information about where the entities are located, so that the client does not have to guess.
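
To illustrate the idea behind the hook, here is a rough Python sketch of a gettext wrapper that annotates its output with the entity’s identity. The function name and the `data-l10n-id` attribute are my own invention for this sketch, not the actual PHP hook’s API.

```python
import gettext

# NullTranslations simply echoes the source string; a real deployment
# would load a compiled .mo catalog here instead.
_catalog = gettext.NullTranslations()

def pontoon_gettext(entity_id, source):
    """Translate `source` and wrap it so an in-page client can find it."""
    translated = _catalog.gettext(source)
    # Carrying the entity id in the markup lets the client map DOM
    # nodes back to localizable entities instead of guessing.
    return '<span data-l10n-id="%s">%s</span>' % (entity_id, translated)

print(pontoon_gettext("home.title", "Welcome"))
# <span data-l10n-id="home.title">Welcome</span>
```

The point of the sketch is only the shape of the contract: the hook does the translation as usual, but leaves enough metadata behind for the client to work with.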

I’ll dig into the details in a separate post – there’s much more to how Pontoon can operate and to the possible enhancements for each of the three components – but for now I’d like to present a short video of Pontoon being used to localize two old versions of our website projects.

The video is coffee-length, so feel free to grab one and schedule 6 minutes 🙂

URLs used in the video:


  • the locale codes supported by the server are usually ab_CD, not ab – for example de_DE, fr_FR, etc.
  • Pontoon does not support multiple users at the same time, so you may observe strange results when many people try it at once. Enjoy alpha mode!
  • “0.1 alpha” means: I’m looking for feedback, comments, ideas and contributions 🙂


My vision of the future of Mozilla localization environment (part 1)

After two parts of my vision of local communities, I’d like to make a sudden shift and write a bit about the technical aspects of localization. The reason is trivial: the third, and last, part of the social story is the most complex one and requires a lot of thinking to get right.

In the meantime, I work on several aspects of our l10n environment and I’d like to share with you some of the experiences and hopes around it.

Changes, changes

What I wrote in the social vision, part 1, about how the landscape of Mozilla is changing and getting more complex holds true from the localization perspective, and requires us to adapt in much the same way it requires local communities to.

There are three major shifts I observe that make our past approach insufficient.

  1. User Interfaces become more sophisticated than ever
  2. Product forms are becoming more diversified, and new forms of mashups appear that blend web data, UI and content
  3. Different products have different “refresh cycles”, in which different amounts of content/UI are replaced

Historically, we used DTD and properties files for most of our products. The biggest issue with DTD/properties is that those two formats were never meant to be used for localization. We adapted, exploited and extended them to match some of our needs, but their limitations are pretty obvious.

In response to those changes, we spent a significant amount of time analyzing and rethinking l10n formats to address the needs of Mozilla today, and we came up with three distinct forms of data that require localization, and three technologies that we want to use.

L20n
Our major products, like Firefox, Thunderbird, Seamonkey or Firefox Mobile, are becoming more sophisticated. We want to show as little UI as possible; each pixel is sacred. If we decide to take a piece of the screen from the user, we want to use it to the maximum. Small buttons and toolbars should be denser – they should present and offer more power, be intuitive, and let the user keep full control of the situation.

That exposes a major challenge for localization. Each message must be precise, clear, and natural to users to minimize their confusion. Strings are becoming more complex, with more data elements influencing them. It’s becoming less common to have plain, static sentences. It’s becoming more common for a string to show in a tooltip, to have little screen space (on Mobile), and to depend on the state of other elements (number of open tabs, time, gender of the user).
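
As a toy example of such a state-dependent string, consider a message whose form depends on the number of open tabs. The plural rule below is the standard CLDR rule for Polish, but the variant keys and the messages themselves are only an illustration, not an actual Mozilla or L20n API:

```python
def polish_plural(n):
    """Return the CLDR plural category for a Polish cardinal number."""
    if n == 1:
        return "one"
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "few"
    return "many"

# Illustrative message variants; a flat .properties file cannot
# express this selection logic without help from the source code.
VARIANTS = {
    "one": "{n} otwarta karta",    # 1 open tab
    "few": "{n} otwarte karty",    # 2-4, 22-24, ... open tabs
    "many": "{n} otwartych kart",  # 5-21, 25-31, ... open tabs
}

def open_tabs_message(n):
    return VARIANTS[polish_plural(n)].format(n=n)

print(open_tabs_message(1))   # 1 otwarta karta
print(open_tabs_message(5))   # 5 otwartych kart
print(open_tabs_message(22))  # 22 otwarte karty
```

With DTD/properties, logic like `polish_plural` has to live in the product’s source code, identically for every locale; the point of moving it into the localization layer is that each language can carry its own rules.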

DTD/properties are absolutely not ready to meet those requirements, and the more hacks we implement, the harder it will be to maintain the product and its localizations. Unfortunately, the other technologies we considered, like gettext, XLIFF or Qt’s TS file format, share most of those limitations and have themselves been actively hacked around for years (see gettext’s msgctxt).

Knowing that, we started thinking about what a localization format/technology would look like if we could start today. From scratch. Knowing what we know. With the experience that we have.

We knew that we would like to solve, once and for all, the problem of the astonishing diversity of languages, linguistic rules, forms and variables. We knew we’d like to build a powerful toolset that would let localizers maintain their localizations more easily, and localize with more context information (like where a string will be used) than ever. We knew that we wanted to simplify the cooperation between developers and localizers. And we knew we would love to make it easy to use for everyone.

Axel Hecht came up with the concept of L20n. A format that shifts several paradigms of software localization by enabling algorithmic power outside of the source code. His motto is “Make easy things easy, and complex things possible”, and that’s exactly what L20n does.

It doesn’t make sense to try to summarize L20n here – I’ll dig deeper in a separate blog post in this series – but what’s important for the sake of this one is that L20n is meant to be a new beginning, different from previous generations of localization formats, differently defining the contract between localizer and developer called “an entity”.

It targets software UI elements, should work in any environment (yes, Python, PHP and Perl too), and allows building natural sentences with the full power of each language without leaking that complexity to other locales or to the developers themselves. I know, it sounds bold, but we’re talking about Pike’s idea, right?

Common Pool

While our major products require more complexity, we’re also getting new products in Mozilla that often require little UI, because they are meant to be non-interruptive. Their localization entities are plain and simple, short, and usually have a single definition and translation. The land of extensions is the most prominent example of this approach, but more and more of our products have such needs.

Think of an “OK” and a “Cancel” button. In 98% of cases, their translations are the same no matter where they are used. In 98% of cases, their translations are the same across all products and platforms. Then there are three kinds of exceptions.

First, sometimes the platform uses a different translation of the word. MacOS may translate “Cancel” differently than Windows does. It’s a simple, systematic difference shared by all products. It does not make sense to expose this complexity in every localization case and prepare each one separately for this exception.

Second, sometimes an application is specific enough to use a very particular translation of a given word. Maybe it is a medical application? A low-level development tool, or one for lawyers only? In that case, once again, the difference is easy to catch and there’s a very clear layer at which we should make the switch. Exposing it lower in the stack, for each entity use, makes no sense.

Third, it is possible that a single use of an entity may require a different translation in a given language. That’s an extremely rare case, but a legitimate one. Once again, it doesn’t make sense to leak this complexity onto others.

The Common Pool addresses exactly this type of localization: simple, repetitive entities that are shared among many products. In order to handle the exceptions, we’re adding a system of overlays that lets a localizer specify a separate translation at one of the three levels above (possibly more).
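
The overlay idea can be sketched as a fallback lookup: the most specific level (a single use) wins, falling through product and platform overlays down to the common pool. All names and the Polish translations below are illustrative assumptions, not the real design:

```python
COMMON_POOL = {"cancel": "Anuluj"}
PLATFORM_OVERLAYS = {"macos": {"cancel": "Poniechaj"}}  # systematic per-OS difference
PRODUCT_OVERLAYS = {"medical-app": {}}                  # app-specific wording
ENTITY_OVERLAYS = {}                                    # per-use exceptions, extremely rare

def translate(entity, platform=None, product=None, use_id=None):
    # Most specific level wins; anything unspecified falls through
    # to the shared common-pool translation.
    for overlays, key in ((ENTITY_OVERLAYS, use_id),
                          (PRODUCT_OVERLAYS, product),
                          (PLATFORM_OVERLAYS, platform)):
        if key is not None and entity in overlays.get(key, {}):
            return overlays[key][entity]
    return COMMON_POOL[entity]

print(translate("cancel"))                    # Anuluj
print(translate("cancel", platform="macos"))  # Poniechaj
```

The nice property of this shape is that the 98% case costs a single dictionary entry, and each exception is declared exactly once, at the level where it actually applies.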

L20n and the Common Pool complement each other, and we’d like to make sure that they can be used together, depending on the potential complexity of the entity.

Rich Content Localization

The third type is very different from the two above. Mozilla today produces a lot of content that goes way beyond product UI, and localization formats are terrible at dealing with such rich content – the sentences, paragraphs, and pages of text mixed with headers and footers that fill all of our websites.

This content is also diversified. SUMO or MDC articles may be translated into a significantly different layout, and their source versions are often updated with minor changes that should not invalidate the whole translation. On the other hand, small event-oriented websites like Five Years of Firefox or Browser Choice have different update patterns than project pages like Test Pilot or Drumbeat.

In this case, trying to build the social contract between developers and localizers by wrapping pieces of text in uniquely identifiable objects called entities, signing them, and matching translations to sources the way we do with product UI doesn’t make sense. Localizers need great flexibility; some changes should propagate to localizations automatically, while only some should invalidate them.
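
One way to implement “only some changes should invalidate translations” is a normalized fingerprint of the source, so that edits touching only markup or whitespace keep existing translations valid. This is purely illustrative, a sketch of the idea, not how SUMO or MDC actually track changes:

```python
import hashlib
import re

def content_fingerprint(html):
    """Hash the text of a document, ignoring markup and whitespace."""
    text = re.sub(r"<[^>]+>", " ", html)              # drop markup
    text = re.sub(r"\s+", " ", text).strip().lower()  # normalize spacing/case
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def needs_retranslation(old_source, new_source):
    return content_fingerprint(old_source) != content_fingerprint(new_source)

print(needs_retranslation("<p>Hello  world</p>", "<div>hello world</div>"))  # False
print(needs_retranslation("<p>Hello world</p>", "<p>Goodbye world</p>"))     # True
```

A real system would want something finer-grained (per-paragraph fingerprints, a notion of “minor” textual edits), but even this coarse version shows how the invalidation decision can be decoupled from the raw diff.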

For this last case we need very different tools, specific to document/web content localization; if you have ever tried Verbatim or direct source HTML localization, you probably noticed how far that is from an optimal solution.

Is that all?

No, I don’t think so. But those are the three we identified, and we believe we have ideas on how to address them using modern technologies. If you see flaws in this logic, make sure to share your thoughts.

Why am I writing about this?

Well, I’m lucky enough to be part of the l10n-drivers team at Mozilla, and I happen to be involved, in different ways, in experiments and projects that aim to address each of those three concepts. It’s exciting to be in a position that allows me to work on that, but I know that we, the l10n-drivers, will not be able to make it on our own.

We will need help from the whole Mozilla project. We will need support from the people who produce content, who create interfaces, and of course from those who localize – from all of you.

This will be a long process, but it gives us a chance to bring localization to the next level and for the first time ever, make computer user interfaces look natural.

In each of the following blog posts I’ll focus on one of the above types of localization and present the projects that aim at this goal.


My vision of the future of Mozilla local communities (part 2)

In my previous blog post, I summarized the transition the Mozilla project went through and how it applies to local Mozilla communities. I explicitly mentioned the enormous growth of the Mozilla ecosystem, the diversification of products & projects, and the differentiation of project development patterns, which results in different requirements for marketing, QA, support, localization, etc.

Now, I’d like to expand on how I believe our local communities operate today.

On Local Community Workload


The result of this growth of the Mozilla ecosystem is a rise in the workload our local communities experience. With this work comes the challenge of communicating the richness of Mozilla on local ground. Localizer workload is mounting, and local communities are trying to find ways to adapt, because:

First, it is not scalable to manage all Mozilla localizations with a team sized to fit the needs of 5 years ago.

Second, localizers are not the only kind of people in a local community. There are various tasks requiring different skills, and different people may find different sorts of motivation to work on different aspects of Mozilla.

It’s pretty easy to get out of balance and take on more than you can handle when there’s so much going on and you feel in charge of your locale. Some communities are more successful at finding their way; some are struggling.

I believe we have to adjust our approach to this new reality.

My opinion on the role of l10n-drivers

Traditionally, a lot of the local engagement work has fallen on the plate of localizers. The l10n-drivers team thus becomes very important in helping local communities manage their workload. Having participated as an l10n-driver for over a year now, I see how the team became crucial in supporting communities in several ways. It:

  • Makes sure that when we call for localizations, what you localize will be used for a long time, to maximize the work/value balance.
  • Builds tools that reduce the entry barrier and the time localizers spend on localization and the local management tasks around it.
  • Provides information on projects, their roadmaps, goals and results (metrics) to help localizers make informed decisions on what to localize and when.
  • Supports localizers in solving localization blockers, like hardcoded or untranslatable strings, to make the results of their work worth the time they spent, and so that, if they want, they can fully localize the product and make it look awesome and natural in their language. (Read: one untranslatable string ruins hard work and is a great way to demotivate anyone.)
  • Helps adjust project roadmaps to minimize the overlap between releases and spread the workload over time.

But the role of local communities has expanded far beyond just localization. Our team’s work will not be enough, and I think we have to revise the assumptions we all make about what the localization process is, and what our goals are.

My opinion of the changing role of Localizers

Localization of Mozilla today is not a single, homogeneous task like it used to be. There are different tasks to take on and different people who want to contribute. Some tasks require short spikes of attention once a year (around a release), others require bi-weekly contributions, others have no release schedule and simply take any contribution. Each requires a different amount of energy, focus, attention and time.

And the core goal of localization – bringing the product closer to your local ground – is suddenly becoming a complex toolset. With so many projects to choose from, local communities should stop thinking of them as a single bundle. Instead we should all start recognizing that this diversification allows us to pick what we need.
You, local community members, are best positioned to make the right decision about which projects are needed in your region. We cannot assume that each region needs the same amount of Mozilla ingredients.

By that I mean not only the ability to pick the projects to localize for your region, but also deciding, together with the project leaders, how much of a project should be translated and what kind of adjustments are required for your culture. It’s extremely important to understand that sometimes you cannot localize everything, although we all know how satisfying it is to “collect them all”. Sometimes the “top 10” articles give better results than trying to figure out how to translate everything. And sometimes you need to go beyond translation. The “top 10” SUMO articles in English may differ from the “top 10” in your locale, and some aspects of a marketing campaign could resonate better in your country if you adjusted them to your culture and reality.

Armed with this power, local communities can pick the projects that best resonate with what is needed to promote the Mozilla vision in their region and put more effort into those. It’s a great power, and a great responsibility, and we have to trust local communities: they know better than any centralized decision-making system ever can what is important. And we, the owners and peers of the projects, have to help local communities make the right choices and fine-tune the ingredients they picked. You, the local communities, are in charge here.

Local community

With so many tasks represented in Mozilla – evangelism, marketing, PR, QA, development, support, localization – it may be very challenging for a localization team to fulfill them all. Many local communities work on various aspects of the Mozilla project, and what they have in common is a regional identity and proximity that allow them to support one another, share resources and find new contributors. I believe it’s crucial to preserve local identity, and that there is great value for every contributor around the world in peering with other contributors working on other aspects of Mozilla in their region; localization is not the only task out there.

And more than ever, we need local communities to cooperate with Mozilla project leaders to find new contributors and grow the communities. Generating new projects that attract new contributors is one of the key aspects of a healthy, sustainable ecosystem, and that is true both for the Mozilla project as a whole and for Mozilla’s local communities.

In the last part, I’ll try to summarize the state change and give you some ideas to consider.


My vision of the future of Mozilla local communities (part 1)

I know, bold title.

Since I decided to start a blogging week, I see no reason not to start with a major topic I have been working on for a few months now. The future of local communities in Mozilla is made of two parts – social and technical.

I’ll start with the former, and it’s going to be a long one – you know me.

Notice: this is the way *I* see things. It is not representative of the l10n-drivers, the SUMO team, the QA team, or the marketing team.

But it represents the progress of our current thinking about local communities. It is different from what you saw some time ago, it may change in the future, it does not represent any kind of consensus, and my peers may disagree with me on some of my points.

A little bit of history

Historically – and by that I mean the years 2000–2004, when the first strong local communities were constituted – everything centered around localization. The localization ecosystem had several characteristics:

  • a finite number of projects
  • the core of any local community were its localizers
  • each product had a limited number of strings
  • each product had a release cycle no shorter than 1 year
  • there was limited awareness of the importance of localization among Mozillians

Another specific thing about that time was that Mozilla as a community/project started growing faster than Mozilla as an organization. By this, I mean that people started participating in Mozilla all over the world, sometimes faster than the organization could predict, know about, understand and harness. It was very independent. What happened in Poland was very different from what happened in Italy, or the U.S., or anywhere else. In the days when Mozilla was formally organizing, few people at the “central project” could predict what was happening across the world. At times, it was very frustrating to them… things were happening so fast, beyond the organization’s control.

The Internet allowed us all to download the early Mozilla products and gave a motivated community something to gather around. We did, and it was amazing. People started fan sites, discussion forums, and “news-zines”. The most determined sought ways to bring Mozilla to their countries. The most natural way to participate was to localize the product – and by localize I mean the various actions that make a product better suited to the local market: translating, changing defaults, and adding new features or modifying existing ones.

All this work was usually targeted in two directions: toward local markets, where those early community leaders were building local branches of Mozilla, and toward the Mozilla project, to fit the concept of local communities and the fundamental goal of internationalization into the core of our project culture.

Thanks to the work of those days, today we can say that Mozilla is a global project, and we recognize localizability as one of the hallmarks of the Mozilla approach to projects.

But since those days, many things have changed. What was good back then may not be enough today.

Growth and Variety

Fast-forward to today: Mozilla as a meta-project is producing a much richer set of projects/products/technologies than we ever did.

We create many websites of various sizes. We have blends of websites and extensions (like Test Pilot). We have webtools like Bugzilla. We have products like Firefox, Thunderbird, and Seamonkey. We have a mobile product with tighter screen-space limits. We have experiments like Ubiquity or Raindrop that introduce a new level of complexity for localization. We have more content than ever.

The point is this: local communities represent Mozilla through a diverse set of mature products, early prototypes, innovative experiments, one-time marketing initiatives, and documents like our Manifesto that will live forever. This means that the workflow has changed dramatically since the early days. Different projects with different or changing frequencies are becoming the standard for communities to absorb, in a newly differentiated and highly competitive marketplace. And our communities need to evolve to respond to this.

Each product has different characteristics, and local delivery through l10n and marketing means a very different type of commitment. It now requires different amounts of time and energy, different types of motivation, and different resources.

Additionally, we’re more diversified in the quest to fulfill our mission. We have regions where modern web browsers constitute the vast majority of the market share, where governments, users and media understand the importance of browser choice and privacy, and where the Internet is a place where innovation happens. But we also have places where that is not the case – where incumbent browsers still hold the majority, where the web will not move forward the way it has elsewhere, where the latest technologies cannot be used, and where privacy and openness sound artificial.

Recognizing this shift is an important factor in adjusting to the new reality, where local communities have to expand beyond just localization. They must become local Mozilla representatives experienced in evangelism, marketing, localization, software development, and all other aspects of Mozilla. We need to get more local, and grow beyond the responsibilities our local communities had in the past.

In the next part, I’ll cover some ideas for the future…


In MtV – blogging week

I delayed it way too long, and now I feel I need to catch up on a lot of stuff.

So, since I just got to MtV, where I’ll spend some time now, I decided to organize a personal blogging week: each day I’ll blog about a piece of what I’m working on, to hopefully catch up on the projects I failed to blog about lately 🙂

On the plate we have: Jetpack stuff, various dimensions of L20n, Pontoon, a survey project, and, for the weekend, some non-Mozilla projects as well 🙂

If you’re in the Bay Area and want to share a drink or coffee, or socialize in any other way, let me know. And if you’re at 300 Castro, I’ve claimed ownership of a desk next to Seth and Asa. It’s a bit busy here, but I like networking :]

main tech

Reading list for fellow Warsaw TEDxers

Lori and Noam asked me to share some books that could get them deeper into the rabbit hole. Here we go:

There is also the Mozilla Library, with a lot of slide decks about the Mozilla project.

And two more decks on Mozilla:

Hope that’s a good start 🙂


TEDxWarsaw slides

Since quite a number of people asked for them, here are my slides:

Creative Commons License
Government hackability by Zbigniew Braniecki is licensed under a Creative Commons Attribution 3.0 Poland License.

Oh, and by the way: if you liked my slides from TEDx, I think you’ll like my slides from eLiberatica ’09. And if you learn something about Mozilla while reading them, I’ve won. 🙂

main po polsku tech

Following up on my two previous posts, here is a third.

Sł is a project built on the OpenPolitics applications that collects candidates’ election pledges and then lets you analyze them and hold the candidate to account once they win.

The application is still fresh, but I’m using it to test account creation, reputation building, and moderation.

I’d like to polish it over the next few months and launch it once the National Electoral Commission (Państwowa Komisja Wyborcza) publishes the list of candidates.

An important element of the system will be filtering out unverifiable pledges and focusing on pledge credibility and NPOV. For now I’m thinking of requiring every pledge to come with two sources and a description of a time-bound verifiability test. That way the service won’t collect pledges like “Things will get better”, only ones that can actually be verified after the election.

I don’t know yet whether reputation is a good idea, but it costs nothing, so I added it. Every user starts with a certain reputation, and depending on their actions (for now they can only add a pledge or update it, but in the future they will also be able to, say, report incorrect data) it can grow.

For now I also use a registration module, but I’m considering switching to OpenID as the only way to create an account, to avoid collecting personal data and passwords.

Feel free to play and test. The test instance of the project looks like this.

If you want, you can also install it yourself. It installs much like the rest of the applications in the openpolitics package. 🙂


openpolitics i

Following up on the considerations from my previous post, today I’m launching a project codenamed OpenPolitics.

OpenPolitics is a set of applications written in Django that aggregate government data and expose it in a form accessible both to users and to computers.

The former matters less; the latter is the foundation of the project. I’d like to enable people to write applications that use publicly available government data, which today is hard to reach and inaccessible to computer programs. Paradoxically, the inability to fetch that data stops many of my friends from writing applications that support building civil society, and instead sends them off to write applications for which data is available. Say, yet another Twitter client, a game, or a graphics mashup.

If we like how many applications spring up and how quickly they mobilize people in areas such as Firefox extensions, iPhone or Android apps, or data-analysis systems for Facebook or Twitter, we have to understand that the foundation of that ecosystem is access to data, and an API that lets you operate on it.

My project aims to build an interface between a world rooted in the beautiful, 20th-century model of democracy and the cyber-society. I think of it as a kind of OCR that lets you use valuable data locked away in antediluvian systems.

In short, the project is meant to allow writing all kinds of applications that operate on data such as:

  • Who is Poland’s Prime Minister today?
  • What is the email address of the Marshal of the Sejm?
  • How many PiS senators are there?
  • When was the last sitting of the Sejm?
  • What changed between two versions of a bill?
  • How did my MP vote over the last six months?
  • Who chaired the last session of the Sejm?
  • How many advisors does the President of Poland have, and what are their email addresses?

These are just a few questions whose answers can be found today, but if we wanted to write an application that uses this data, processes it, or presents it in a form of its own choosing, we would have to… well… laboriously regexp our way through government websites. OpenPolitics does exactly that for us and exposes a reasonably clean API for fetching such data.
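
To give a feel for what consuming such machine-readable data could look like, here is a sketch; the JSON shape and field names are invented for illustration and are not the project’s real schema:

```python
import json

# A hypothetical payload of the kind an OpenPolitics instance might
# expose; schema invented for illustration only.
sample_response = """
{
  "chamber": "Senat",
  "members": [
    {"name": "Jan Kowalski", "party": "PiS"},
    {"name": "Anna Nowak", "party": "PO"}
  ]
}
"""

def count_members(payload, party):
    """Answer a question like "how many PiS senators are there?"."""
    data = json.loads(payload)
    return sum(1 for m in data["members"] if m["party"] == party)

print(count_members(sample_response, "PiS"))  # 1
```

The point is that once the data is structured, answering any of the questions above becomes a few lines of code instead of scraping government pages.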

The whole thing is open: you can set up your own instances, improve it, help me develop it, and adapt it to your needs.

Along with the release of OpenPolitics 0.1, I’ve made an installation of this version of the application available for testing and playing with the new capabilities 🙂

I’ve put answers to a number of questions in the FAQ.

Anything else? Tomorrow, at TEDx Warsaw, I’ll have the pleasure of speaking a bit more about the intersection of technology and politics. The topic of my talk is “Government Hackability”; I start at 17:11, and there should be a live stream 🙂

Happy hacking, and if you’d like to help, I’ve collected a list of JuniorJobs.


Opening up politics, DIY style

I’m at an age where I still have dreams. I dream big and fast.

One of the areas where I have particularly many dreams is the badly neglected intersection of computing and politics. For nearly ten years I have participated in, or watched the growth of, an enormous number of projects that set themselves fantastic goals.

Wikipedia with its mission of collecting the world’s knowledge, Ubuntu with its desire to create Linux for human beings, and finally the one closest to me, Mozilla, with its ideals of an Open Internet. Everyone who works on these and hundreds of other projects has developed certain habits, a certain know-how. We build powerful layers of tools that let us pursue our goals in the specific conditions of the Internet.

On the other hand, like probably everyone, I watch politics – the kind that drifts further and further from the “reality” I live in. Politics that speaks my parents’ language, solving problems I care less and less about, in ways that strike me as inefficient at best.

At the same time, a whole society is developing extremely fast, one that simply ignores the world our politics moves in. The challenges, problems, obstacles and methods we take up to grow Wikipedia, to communicate effectively through Facebook, to co-author documents in Etherpad or to make amateur films form a world that grows and changes so fast that the only thing that surprises me is how unserious, from this perspective, the space of public debate about politics starts to look.

And now, in precisely this reality, as the space of the Internet expands and becomes an important part of people's lives, politics begins its clumsy attempts to approach this creature it finds so strange. One politician will start a blog, post on it twice as a trial and get discouraged; another will sign up for Twitter, where he describes his breakfast. Numerous committees on the Internet and Its Manifestations spring up, and sometimes the state even takes fright at the anarchy reigning there and decides to protect its citizens with a law banning something, completely failing to understand that the Internet has learned to solve its own problems, and that young people, raised with WiFi waves flying around them since childhood, perceive such moves the way you would watch an elephant in a china shop: clumsy, silly, and too slow to react and adapt.

On the other side, we have something that could be called the "Web Approach". A world where the technological possibilities appearing every six months are so groundbreaking that all earlier solutions lose their meaning. A world where everything is doable and is merely a question of the time and skill put into programming a given solution. Where projects form spontaneously to respond to emerging challenges.

It's a world in which transparency is built into the ecosystem's DNA, in which change is the only measurable constant, in which law is generated locally, per service, and adjusted in reaction to change within days or weeks.

The dynamics of this world, its fundamental difference, demand a generational shift. Politics is not ready for it and projects its familiar mental maps onto phenomena radically different from anything it knew before.

The result is what we can observe in the realm of "consultations" with Internet users, and in the realm of applications written by the state. The meeting with the Prime Minister was a charming example of a conversation in two different languages; the Prime Minister's Chancellery proposal to elect "representatives of the Internet users" is another; and various politicians' statements along the lines of "we disclosed this data only on the Polish Internet" are real gems.
Recently, at some convention, one of the ministers proudly presented the conclusion that the state must go electronic and use email… I probably don't need to explain how absurd that is in light of the serious discussions, ongoing for quite a while now, about email losing its relevance.

All of this is outdated at best, and at worst an embodiment of the technological and social chasm described above.

On the other hand, the State is and will remain important. And it can, indeed should, be a powerful tool in the hands of its citizens. Technology, in turn, can give us a powerful weapon: it can let us hold those who govern to account, and it can energize these processes by building a platform for dialogue and cooperation in the place where citizens already are, on the Internet, using mechanisms they already know.

This is not a question of "if" but "when". Companies like Dell or Google have no problem getting feedback from the whole world, and they don't need to meet with a 20th-century archaism, "representatives of the Internet users", to do it. In the coming years, the world of politics faces a long and painful lesson in the new reality its voters live in.

And in the meantime, we will keep solving our problems ourselves… bottom-up, organically, evolutionarily… problems like… how to write a Firefox extension that tells me who is the Prime Minister of Poland today, or where to get a list of email addresses for PO members of parliament?

These and other questions will be answered in the next post 🙂