A few days ago we got yet another reason to leave centralized, a-social networks behind. You probably do not want crazy billionaire serial innovators to mercilessly “reinnovate” your virtual neighborhood. Aside from that, there are the secret timeline algorithms, about which little is known besides that they primarily amplify hatred and biased, extremist opinions for the sake of maximizing impressions, user interactions, and platform revenue. At the same time, one’s own personal timeline is polluted and tampered with by all kinds of paid advertising, whilst the underlying personal data of each and every user is generously sold in all directions.
But wasn’t this all supposed to be just about “… connecting with friends and the world around you”?
Exactly. But for that we have the Fediverse, and when it comes to microblogging there is Mastodon. As you have probably already heard of both, let’s dive right into organizing your Twitter exodus step by step.
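Unlike the walled gardens, Mastodon exposes its functionality through an open, documented REST API. As a tiny illustration – the instance URL and access token below are placeholders you would replace with your own – posting a status (“toot”) is a single authenticated HTTP call:

```python
import requests

# Placeholders – use your own instance and an access token from its settings page.
INSTANCE = "https://mastodon.example"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def post_status(text: str) -> dict:
    """Publish a status ("toot") via Mastodon's public REST API."""
    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": text},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    toot = post_status("Hello, Fediverse!")
    print(toot["url"])  # link to the freshly published status
```

No hidden ranking, no ad injection – just your post, delivered to the people who chose to follow you.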
Last week GitHub and its parent company Microsoft announced “GitHub Copilot – their/your new AI pair programmer”. The New Stack, The Verge, and CNBC, for example, have reported extensively on it. And there is a lot of buzz around this new service, especially within the Open Source and Free Software world – not only among its developers, but also among its supporting lawyers and legal experts – although the actual news is not that groundbreaking, because it is not the first of its kind. Similar ML/AI-based offerings like Tabnine, Kite, CodeGuru, and IntelliCode, which have also been trained on public code, are already out there.
Copilot is currently in “technical preview” and, according to GitHub, planned to be offered as a commercial version.
The core of it appears to be OpenAI Codex, a descendant of the famous GPT-3 for natural language processing. According to its homepage, it “[…] has been trained on a selection of English language and source code from publicly available sources, including code in public repositories on GitHub”.

Update 2021/07/08: GitHub Support appears to have confirmed that all public code at GitHub was used as training data.
Great, what amazing times we are living in! It sounds like with Copilot you no longer need the human co-programmers who assisted you in the good old days in the form of pair programming or code review. Lucky you, and especially your employer. On top of that, you will save precious time because it will help you directly fix a bug, write typical functions, or even “[…] learn how to use a new framework without spending most of your time spelunking through the docs or searching the web”. Not to forget copying & pasting useful code fragments from Stack Overflow or other publicly available sources like GitHub.
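To make concrete what kind of assistance we are talking about, here is a mock-up in the typical demo style – not an actual Copilot transcript, and the function name and input format are made up for illustration. You write a signature and a docstring, and the model proposes a body distilled from its training data:

```python
# What you type: a signature and a docstring ...
def parse_expenses(expenses_string: str) -> list:
    """Parse a string of expenses into (date, value, currency) tuples.

    Lines starting with '#' are comments; dates use the YYYY-MM-DD format.
    """
    # ... and the kind of body a tool like Copilot might suggest:
    from datetime import datetime

    expenses = []
    for line in expenses_string.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        date, value, currency = line.split(" ")
        expenses.append((datetime.strptime(date, "%Y-%m-%d"), float(value), currency))
    return expenses
```

Whether such a suggestion is a statistical synthesis or a near-verbatim recall of somebody’s repository is exactly what the following questions are about.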
At the same time, two essential questions arise if you care at all about authorship:
Did the training of the AI infringe any copyright of the original authors who actually wrote the code that was used as training data?
Will you violate any copyright by including Copilot’s code suggestions in your source code?
Let’s not talk about another aspect that GitHub mentions in their FAQs – personal data: “[…] In some cases, the model will suggest what appears to be personal data – email addresses, phone numbers, access keys, etc. […]”
The results of the Open Source Impact Study commissioned by the European Commission have been widely discussed, mainly because of their numbers. Though only announced now, the study identified for the year 2018 a FOSS contribution of 0.4% to the GDP, worth EUR 63 billion, measured by the increase in commits. 10% more contributors would even raise the GDP of the European Union by a further 0.6% (EUR 95 billion). The overall cost-benefit ratio is estimated at at least 1:4.
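As a quick back-of-the-envelope sanity check, using nothing but the two headline figures: both claims imply roughly the same EU GDP baseline, so the numbers are at least internally consistent.

```python
# Back-of-the-envelope check of the study's two headline figures (2018):
foss_contribution = 63e9     # EUR: FOSS contribution, stated as 0.4% of GDP
implied_gdp_a = foss_contribution / 0.004    # -> 15.75 trillion EUR

extra_contribution = 95e9    # EUR: effect of 10% more contributors, 0.6% of GDP
implied_gdp_b = extra_contribution / 0.006   # -> 15.83 trillion EUR

# Both imply an EU GDP of roughly 15.8 trillion EUR, so the figures line up.
print(f"{implied_gdp_a / 1e12:.2f} vs. {implied_gdp_b / 1e12:.2f} trillion EUR")
```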
But it gets even more interesting when looking into the results of the accompanying survey, covering about 900 stakeholders (mainly companies) from all around Europe.
For them, the main incentives for using and investing in Open Source were, sorted by relevance:
finding technical solutions
avoiding vendor lock-in
carrying forward the state of the art of technology
As benefits, they saw:
support of open standards and interoperability
access to source code
independence from proprietary providers of software
Among the participants, the cost-benefit ratio was estimated at as high as 1:10.
The current circumstances also forced conferences (those gatherings with really large audiences) completely into cyberspace. Some stuck with traditional approaches, streaming talks via off-the-shelf videoconferencing applications and building upon the very limited integrated interaction features offered by these poor proprietary tools. Others went completely new ways and brought fascinating, well-working concepts for how to still successfully connect the crowds, enable lively conversations, and facilitate the exchange of knowledge and experience in a remote environment.
Let’s start with rc3 and its virtual conference venue in the form of the rc3 world, implemented with Work Adventure. In pixel-2D-adventure style you could walk around the area, and as soon as you approached other characters, a live audio and video stream would open with the humans (or other life forms) controlling them. Limited to 4-5 persons at a time, it allowed you to talk directly with each other – face to face. Thanks to that limit, you were still able to have a working conversation.
Admittedly, you needed to get used to sudden, unexpected interactions with one another – on live video – but it brought back the sorely missed opportunity to get in personal touch with other participants possibly sharing similar interests.
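The underlying mechanic is simple enough to sketch. The following is my own minimal reconstruction of the idea, not Work Adventure’s actual code, and the proximity radius is a made-up assumption: each tick, avatars within a certain distance of each other are grouped into a conversation “bubble”, capped at a handful of participants.

```python
import math

MAX_BUBBLE_SIZE = 4       # rc3 world capped conversations at 4-5 people
PROXIMITY_RADIUS = 3.0    # hypothetical distance (in map tiles) that triggers a call

def distance(a: tuple, b: tuple) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_bubbles(positions: dict) -> list:
    """Group avatars into conversation bubbles based on proximity.

    A naive O(n^2) sweep: an avatar within PROXIMITY_RADIUS of a member
    of an existing, not-yet-full bubble joins that bubble.
    """
    bubbles = []
    for avatar, pos in positions.items():
        for bubble in bubbles:
            if len(bubble) < MAX_BUBBLE_SIZE and any(
                distance(pos, positions[other]) <= PROXIMITY_RADIUS
                for other in bubble
            ):
                bubble.add(avatar)  # join: the client would open the A/V stream here
                break
        else:
            bubbles.append({avatar})  # walking alone: a bubble of one
    return bubbles
```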
FOSDEM 2021, the world’s biggest conference on Free and Open Source Software, usually taking place in Brussels, had for me the most convincing overall concept. The organizers and infrastructure artists did a tremendous job, allowing for the most impressive conference experience so far – and for a long time to come. Naturally, it was purely based on Free Software – at its core Matrix, Element, and Jitsi.
How did it work and what was so great about it?
Presentations on specific areas of interest were grouped into virtual rooms with a fixed agenda, like at most physical conferences. Participants logged into a chat infrastructure in which the rooms were represented as group conversations. You would simply join the room(s) you were interested in and could start texting with each other and the speakers, like on IRC. Talks had been recorded beforehand and were automatically started – by the computer (systemd) – at their scheduled time. Their audio and video were streamed right above your chat window. When a talk ended, the Q&A was streamed live for a fixed amount of time within that room, until the next talk started auto-playing according to schedule. During that first part of a talk’s Q&A session, moderators were picking up upvoted questions and comments from the chat and interacting in real time with the presenters. Those interested could then continue discussing with the speakers and extend their conversation by switching to a separate room: per talk, a dedicated room for the second part of the Q&A would open shortly after, and even allowed anyone there to interact live via audio and video.
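Conceptually, the playout automation boils down to something like the following sketch. FOSDEM’s actual setup relied on systemd units; the file name, the RTMP stream target, and the use of ffmpeg here are my own assumptions for illustration:

```python
import subprocess
import time
from datetime import datetime, timedelta

def stream_talk(start: datetime, video_file: str, rtmp_target: str,
                qa_duration: timedelta) -> None:
    """Wait for a talk's scheduled slot, stream the recording, then hand over to Q&A."""
    # Sleep until the scheduled start – no delayed earlier talks, no mic checks.
    time.sleep(max(0.0, (start - datetime.now()).total_seconds()))

    # Play out the prerecorded talk in real time (-re) to the streaming server.
    subprocess.run(["ffmpeg", "-re", "-i", video_file,
                    "-c", "copy", "-f", "flv", rtmp_target], check=True)

    # Afterwards, the room switches to the live Q&A feed for a fixed time slot.
    print(f"Talk finished – live Q&A until "
          f"{datetime.now() + qa_duration:%H:%M}, then the next talk auto-plays.")
```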
In sum, that meant you could check the schedule for topics you were interested in, connect at the announced time, and be sure to actually hear that talk – instead of watching tech staff do mic checks or heavily delayed earlier talks, unsure whether and when the one you came for would actually start.
In addition, the highly valued Q&A and the backstage (and off-the-record) conversations that followed could still take place without interrupting, or being interrupted by, the subsequent talk.
Just impressive and so useful! Thanks a lot to all who made this happen and work that well! These concepts are here to stay, even once conferences hopefully resume in the physical world soon.
A few days ago the oral hearing in the lawsuit between Oracle and Google was held at the U.S. Supreme Court, after it had been delayed by COVID-19. McCoy Smith shares his observations and interpretation in a detailed post, “Oracle/Google”, at Lex Pan Law. The litigation is over the copyrightability – and, if copyrightable, the infringement – of certain parts of Java (mainly its APIs) that were used within Android. If Oracle wins, it will have a significant impact on the whole software world, and especially on Open Source: ultimately, any API (and its use) would become subject to copyright.
I started my digital photography life with a Nikon D80 and Lightroom 1.0 quite a while ago (2007). The moment Adobe stopped selling copies and provided only subscription options was one of the moments it became very clear that an alternative was needed. Let’s not talk about Lightroom CC, its unstable desktop app, and a recent user nightmare.
To be independent of the business needs of a single company, the only option is to go for an alternative licensed under an Open Source license. With that preference in mind, when it comes to RAW processing you have the choice between digiKam, RawTherapee, and darktable.
I had been following darktable for a few years. The 2.x versions never really worked for me. In contrast, the 3.0 and 3.2 releases were milestones in growing darktable into a serious and easy-to-use – not to say even more mature – alternative to Lightroom, and it was time to make the final switch. Now or never.
To share it upfront: this decision left me neither disappointed nor frustrated. I am just wondering: why the hell did I not switch earlier?
It has been instantiated for the sole purpose of trademark management (and enforcement?) for Open Source projects, which are said to be not well positioned to take care of this themselves. For a start, Google assimilated its own projects: Angular, Istio, and Gerrit Code Review. Its own projects? Oh well, at least for Istio – which was co-developed with IBM – it is now clarified who owns its trademark.
In their introductory statement they claim: “[…] Accordingly, a trademark, while managed separately from the code, actually helps project owners ensure their work is used in ways that follow the Open Source Definition by being a clear signal to users that, “This is open source.” […]”
Josh Simmons, president of the Open Source Initiative (OSI), which maintains the referenced definition, has a diplomatic statement on that, which also serves well as a summary: “Of course, OSI is always glad when folks explicitly work to maintain compatibility with the Open Source Definition. What that means here is something we’re still figuring out, so OSI is taking a wait-and-see approach.”
Or is this yet another project for the Google Cemetery, because the Open Source community is not as into trademarks as corporations are?
There are more detailed summaries and discussions: