As April reaches its end, the weather in Brussels is finally turning more and more enjoyable. The unpredictable madness of April weather (in Czech, we call it the month of “April fool’s weather”) is drawing to a close. The alleyways all across the city are colouring themselves in lush hues of green (and pink, in the case of the cherry trees). All kinds of flying bugs are coming back to life (much to Alicia’s dismay). The warmth of the sun can be enjoyed more frequently, and in more than just random, unpredictable spells. If someone asked me, I would say that the beginning of May is when a tourist visit to Brussels starts being the most rewarding: the outdoor patios of many cafés and restaurants open up, and the festivals begin taking place regularly. The gloomy greyness is in the past (for the time being), and the eclectic cultural life of the city comes closer to the surface. // With the conditions improving, I finally found a good opportunity to join the BXL RUN running community for my first casual 5k run around Josaphat. I believe that as online spaces and platforms become ever more attention-draining, polarised and influenced by the invisible hands of bot armies, the importance of in-person friendships and spaces will only keep growing. Organically bumping into new people and making new social connections after moving cities (or even countries) is always a challenge, and gatherings around shared interests or activities are hard to beat.

Recently, I stumbled upon Graham Weaver’s “How to Live an Asymmetric Life” lecture, given last year to graduates at Stanford Business School. While there is no groundbreaking novelty in the ideas he shares, I quite enjoyed the talk. In a nutshell, he advocates for “an asymmetric life” in which one (a) takes on challenges and does hard things, (b) persists for longer periods of time, (c) lives according to one’s own interests, and (d) writes the story of what should happen. In particular, I believe that the concepts of risk:reward (R:R) and placing asymmetric bets deserve to be taught more broadly, and to become more widely understood outside the world of financial markets and trading (where they constitute the true “bread and butter” fundamentals). My experience is as follows: whatever action I consider undertaking, I frequently find it valuable to run a quick risks-and-rewards analysis in my mind (especially in situations where I feel an inner resistance to taking the action). Surprisingly, by directly and specifically evaluating the positive outcomes, and also spelling out the actual potential losses (“what could go wrong”), one frequently realizes that on many occasions the losses are negligible while the possible benefits are (very) favourable. Basically, the fear of unfavourable outcomes is in most cases worse than the unfavourable outcomes themselves. // Besides that, the talk also touches upon the simple fact that “good things take time (sometimes even decades)” - honestly, hearing this always helps, particularly in the current hyper-speed research environment around AI, where game-changing discoveries and breakthroughs seem to occur on what is essentially a weekly basis.
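To make the R:R intuition a bit more concrete, here is a minimal, back-of-the-envelope sketch - all numbers and the cold-email scenario are purely hypothetical, just to illustrate why a bet with a small, bounded downside and a large upside can be worth taking even when it will probably fail:

```python
# Toy sketch of an asymmetric bet: tiny downside, large (unlikely) upside.
# All numbers below are made up for illustration.

def expected_value(p_win: float, reward: float, risk: float) -> float:
    """Expected outcome of a bet that pays `reward` with probability
    `p_win` and loses `risk` otherwise."""
    return p_win * reward - (1.0 - p_win) * risk

# Example: cold-emailing someone you admire. Downside: a few minutes of
# effort and a bruised ego (-1); upside: a mentorship or a job lead (+50);
# assume it only works out 10% of the time.
ev = expected_value(p_win=0.10, reward=50.0, risk=1.0)
print(f"risk:reward = 1:50, expected value per attempt = {ev:+.1f}")  # +4.1
```

Even with a 90% chance of “failure”, the expected value stays firmly positive - which is usually what the quick mental analysis above ends up revealing.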

I am by no means an expert on the AI sector, and my perspective on it comes from a mixture of personal curiosity and the need to stay aware of the field’s status for my day-to-day work. One of the issues that is becoming painfully obvious (and which we hope to alleviate with our work on integrated photonics within HPE) is the energy and resource requirements on the hardware accelerator side. One recent estimate puts the cost of a single (!!!) training run of a state-of-the-art Google model at $191 million. More and more big tech companies are entering the field of efficient, AI-tailored computing hardware, and I am very curious to see how this space unfolds (and how long NVIDIA’s supreme reign lasts). // One recurring theme, somewhat further from the technical side of AI, relates to content: more specifically, both the content used to train AI models and the content generated by them. Regarding the former, my friend Martin recently pointed out an interesting thing: generative music models like Suno v3 and UDIO are impressively capable, and were therefore in all likelihood trained on professional songs. That constitutes either a likely breach of copyright via scraping of music from online sources, or a (secret) deal struck between rights owners (labels) and AI companies. In either case, things don’t seem to be looking rosy for the musicians themselves. // On genAI-produced content: some people seem to fear an artificial general intelligence (AGI) becoming a supreme overlord over mankind; others rightfully point out that a number of common jobs now face a non-negligible risk of being replaced by AI. In my opinion, the most acutely dangerous risk is also the most “boring” one, hiding in plain sight: a scenario where the open internet and our social networks and communication platforms drown in the infinitely-scalable bullshit of AI-generated content (akin to the dead internet theory). In the better case, this is just spam - in the less fortunate case, these are full-scale influence operations deployed at scale to shape public opinion, to divide and conquer. My feeling is that this will perhaps push people towards creating and participating in more gatekept online communities. An interesting side question also arises once AI starts being trained on datasets containing a non-negligible share of AI-generated content - since LLMs operate in a manner somewhat similar to compression algorithms, can such “data taint” lead to performance bottlenecks in future models due to loss of information fidelity? I feel this might become an interesting question in the not-too-distant future.
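On that “data taint” question, here is a deliberately oversimplified sketch of the general phenomenon (often discussed under the name “model collapse”) - a plain Gaussian stands in for a generative model, which is an assumption made purely for illustration and says nothing about how any particular LLM behaves:

```python
import numpy as np

# Toy illustration of training on your own outputs: fit a trivial "model"
# (a Gaussian) to data, then retrain each new generation on samples drawn
# from the previous generation's fit. With finite samples, tail information
# gets progressively lost and the fitted spread tends to shrink.

rng = np.random.default_rng(0)

n_samples = 25     # small training set per generation
generations = 200

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # the "real" data
for gen in range(1, generations + 1):
    mu, sigma = data.mean(), data.std()       # fit the model
    data = rng.normal(mu, sigma, n_samples)   # next generation sees only synthetic data
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.3f}")
```

Real models are of course far messier than this, but the basic failure mode the toy shows - rare patterns getting progressively under-represented with each synthetic generation - is exactly the fidelity loss I am wondering about.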

Among the more personal things, I have finally managed to get back to creating music during the past few months, and I currently have a new song reaching the final stages of production. I am particularly happy about this, because I did not manage to create any new music during 2023, and “getting back to music production” was on my wishlist of things-to-do for 2024. The new instrumental song is called Arcadia, and it aims to channel a somewhat moody (slightly “dystopian”) vibe on top of a strong, rhythmic beat. The beat itself is probably my favourite among all the songs I have created to date. I am currently finalizing some last details of the composition and running it past a few producer friends to get feedback.

That’s all for now - ’til next time.