<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="it"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://mattnot.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://mattnot.github.io/" rel="alternate" type="text/html" hreflang="it" /><updated>2025-12-30T11:56:54+00:00</updated><id>https://mattnot.github.io/feed.xml</id><title type="html">Matteo Notaro</title><subtitle>Matteo Notaro&apos;s personal website. Here you can find my blog, my projects and my resume.
</subtitle><author><name>Matteo Notaro</name><email>matteonotaro@outlook.it</email></author><entry><title type="html">Closing the gap</title><link href="https://mattnot.github.io/2025/02/02/closing-the-gap.html" rel="alternate" type="text/html" title="Closing the gap" /><published>2025-02-02T00:00:00+00:00</published><updated>2025-02-02T00:00:00+00:00</updated><id>https://mattnot.github.io/2025/02/02/closing-the-gap</id><content type="html" xml:base="https://mattnot.github.io/2025/02/02/closing-the-gap.html"><![CDATA[<h1 id="the-gap">The gap</h1>

<p>As a “baby” computer scientist (or software engineer) or someone about to choose this career, you might feel overwhelmed by the sheer firepower of these new LLMs.</p>

<p>The real gap, however, lies in experience, not in firepower.</p>

<p>These LLMs have seen millions, if not billions, of lines of code—probably at least three orders of magnitude more than the amount of code you will encounter in the next 40 years of your career.</p>

<p>Surely, they must have better solutions than anything you could ever come up with, right? Not quite. Let me explain.</p>

<h2 id="professional-intuition-vs-algorithmic-responses">Professional Intuition vs Algorithmic Responses</h2>

<p>When we talk about professional intuition versus algorithmic responses, we’re addressing a fundamental difference in how humans and LLMs approach programming challenges. Let me break this down in detail.</p>

<h2 id="professional-intuition-in-software-development">Professional Intuition in Software Development</h2>

<p>Professional intuition in software development is like a sixth sense that developers cultivate over years of hands-on experience. Imagine a senior developer who, within minutes of looking at a bug report, can narrow down the likely cause not because they’ve memorized every line of code, but because they’ve developed a deep understanding of how systems typically fail. This intuition comes from a rich tapestry of experiences: the late-night debugging sessions, the production incidents that required quick thinking, and the countless conversations with users and stakeholders.</p>

<p>Consider this real-world scenario: A system suddenly starts experiencing intermittent performance issues. An LLM, when presented with the error logs and code snippets, might suggest various optimization techniques based on pattern matching from its training data. However, a seasoned developer might immediately suspect an interaction with a recent business event—perhaps a marketing campaign that changed user behavior patterns—because they understand the broader context in which the code operates.</p>

<h2 id="key-aspects-of-professional-intuition">Key Aspects of Professional Intuition</h2>

<p>This intuition manifests in several key ways:</p>

<h3 id="understanding-system-behavior-under-stress">Understanding System Behavior Under Stress</h3>
<p>Through experience, developers develop an almost instinctive sense of how systems behave under different conditions. They can often predict cascade failures before they happen, not because they’ve calculated every possibility, but because they’ve developed a deep understanding of system dependencies and potential failure points.</p>

<h3 id="context-aware-problem-solving">Context-Aware Problem Solving</h3>
<p>When developers debug issues, they don’t just look at the code in isolation. They consider factors like recent deployments, user behavior patterns, business events, and even the time of year (think holiday season traffic spikes). This holistic view is something that LLMs, despite their vast knowledge, cannot replicate because they lack real-world operational context.</p>

<h3 id="risk-assessment">Risk Assessment</h3>
<p>Perhaps most importantly, developers have an innate sense of risk that comes from real consequences. They understand that every line of code they write could potentially affect real users, business operations, and system stability. This understanding isn’t just theoretical—it’s deeply personal and comes from experiencing both successes and failures.</p>

<h2 id="a-concrete-example">A Concrete Example</h2>

<p>Imagine developing a payment processing system. An LLM might suggest perfectly valid code for handling transactions, complete with error handling and logging. However, a developer with experience in financial systems would instinctively add additional safeguards against double-charging, implement idempotency patterns, and ensure proper reconciliation mechanisms—not because these were explicitly requested, but because experience has taught them these are critical in financial systems.</p>
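<p>To make the idempotency point concrete, here is a minimal sketch of the pattern (the names, amounts, and in-memory store are invented for illustration; a production system would use a durable store and atomic operations):</p>

```python
# Hypothetical sketch of an idempotency guard for a charge operation.
# The key is supplied by the client and identifies one logical payment.
_processed: dict[str, str] = {}

def charge(idempotency_key: str, amount_cents: int) -> str:
    # If this key was already processed, return the original result
    # instead of charging the customer a second time.
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    receipt = f"charged {amount_cents} cents"  # stand-in for the real payment call
    _processed[idempotency_key] = receipt
    return receipt

first = charge("order-42", 1999)
second = charge("order-42", 1999)  # retried request: no double charge
print(first == second)  # True
```

<p>The point is not the code itself but the reflex: nothing in the requirements said “handle retries,” yet experience makes the safeguard automatic.</p>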

<h2 id="the-role-of-llms-in-development">The Role of LLMs in Development</h2>

<p>This doesn’t mean LLMs aren’t valuable—quite the opposite. When we understand that LLMs are pattern-matching tools rather than intuitive problem solvers, we can better leverage their strengths while relying on our intuition for the crucial decisions that require deep understanding and context. The key is recognizing that our intuition isn’t threatened by LLMs but rather becomes more valuable as it helps us better utilize these tools.</p>

<h2 id="strengthening-professional-intuition">Strengthening Professional Intuition</h2>

<p>The development of this professional intuition is ongoing and actually accelerates when working with LLMs. While the AI can handle routine coding tasks, we can focus on developing our understanding of system architectures, business domains, and user needs—areas where human intuition remains irreplaceable.</p>]]></content><author><name>Matteo Notaro</name><email>matteonotaro@outlook.it</email></author><category term="ai" /><summary type="html"><![CDATA[The gap As a “baby” computer scientist (or software engineer) or someone about to choose this career, you might feel overwhelmed by the sheer firepower of these new LLMs. The real gap, however, lies in experience, not in firepower. These LLMs have seen millions, if not billions, of lines of code—probably at least three orders of magnitude more than the amount of code you will encounter in the next 40 years of your career. Surely, they must have better solutions than anything you could ever come up with, right? Not quite. Let me explain. Professional Intuition vs Algorithmic Responses When we talk about professional intuition versus algorithmic responses, we’re addressing a fundamental difference in how humans and LLMs approach programming challenges. Let me break this down in detail. Professional Intuition in Software Development Professional intuition in software development is like a sixth sense that developers cultivate over years of hands-on experience. Imagine a senior developer who, within minutes of looking at a bug report, can narrow down the likely cause not because they’ve memorized every line of code, but because they’ve developed a deep understanding of how systems typically fail. This intuition comes from a rich tapestry of experiences: the late-night debugging sessions, the production incidents that required quick thinking, and the countless conversations with users and stakeholders. Consider this real-world scenario: A system suddenly starts experiencing intermittent performance issues. 
An LLM, when presented with the error logs and code snippets, might suggest various optimization techniques based on pattern matching from its training data. However, a seasoned developer might immediately suspect an interaction with a recent business event—perhaps a marketing campaign that changed user behavior patterns—because they understand the broader context in which the code operates. Key Aspects of Professional Intuition This intuition manifests in several key ways: Understanding System Behavior Under Stress Through experience, developers develop an almost instinctive sense of how systems behave under different conditions. They can often predict cascade failures before they happen, not because they’ve calculated every possibility, but because they’ve developed a deep understanding of system dependencies and potential failure points. Context-Aware Problem Solving When developers debug issues, they don’t just look at the code in isolation. They consider factors like recent deployments, user behavior patterns, business events, and even the time of year (think holiday season traffic spikes). This holistic view is something that LLMs, despite their vast knowledge, cannot replicate because they lack real-world operational context. Risk Assessment Perhaps most importantly, developers have an innate sense of risk that comes from real consequences. They understand that every line of code they write could potentially affect real users, business operations, and system stability. This understanding isn’t just theoretical—it’s deeply personal and comes from experiencing both successes and failures. A Concrete Example Imagine developing a payment processing system. An LLM might suggest perfectly valid code for handling transactions, complete with error handling and logging. 
However, a developer with experience in financial systems would instinctively add additional safeguards against double-charging, implement idempotency patterns, and ensure proper reconciliation mechanisms—not because these were explicitly requested, but because experience has taught them these are critical in financial systems. The Role of LLMs in Development This doesn’t mean LLMs aren’t valuable—quite the opposite. When we understand that LLMs are pattern-matching tools rather than intuitive problem solvers, we can better leverage their strengths while relying on our intuition for the crucial decisions that require deep understanding and context. The key is recognizing that our intuition isn’t threatened by LLMs but rather becomes more valuable as it helps us better utilize these tools. Strengthening Professional Intuition The development of this professional intuition is ongoing and actually accelerates when working with LLMs. While the AI can handle routine coding tasks, we can focus on developing our understanding of system architectures, business domains, and user needs—areas where human intuition remains irreplaceable.]]></summary></entry><entry><title type="html">LLM and AGI: is it the right direction?</title><link href="https://mattnot.github.io/2024/12/08/llm-agi.html" rel="alternate" type="text/html" title="LLM and AGI: is it the right direction?" /><published>2024-12-08T00:00:00+00:00</published><updated>2024-12-08T00:00:00+00:00</updated><id>https://mattnot.github.io/2024/12/08/llm-agi</id><content type="html" xml:base="https://mattnot.github.io/2024/12/08/llm-agi.html"><![CDATA[<h1 id="llm-and-agi"><strong>LLM and AGI</strong></h1>
<p>Nowadays, there’s a lot of talk about LLMs (large language models), often in connection with the concept of AGI (Artificial General Intelligence). AGI represents the challenge of creating a general-purpose AI capable of doing everything as well as humans—or even better, in the case of ASI (Artificial Super Intelligence).<br />
Notable experiments have already emerged, one of the most famous being <a href="https://x.com/babyagi_?s=21">Baby AGI</a>.</p>

<p>The question I’d like to raise is the following:</p>
<h2 id="are-we-sure-a-language-model-is-what-will-lead-us-to-this-goal"><strong>Are we sure a language model is what will lead us to this goal?</strong></h2>

<h3 id="they-predict-they-dont-think">They predict, they don’t think.</h3>
<p>I ask because many overlook a fundamental and self-evident concept: <strong>these models don’t think.</strong><br />
What these models do is one of the most classic tasks in AI: they predict a value. They analyze the “past” and, after a seemingly infinite series of matrix multiplications, output a bunch of values between 0 and 1, picking the token with the highest probability.</p>
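<p>A toy sketch of that final step, with an invented three-token vocabulary and made-up scores: raw logits are squashed into probabilities between 0 and 1, and greedy decoding simply picks the highest one:</p>

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_next_token(logits, vocab):
    # Turn raw scores into probabilities that sum to 1,
    # then pick the single most probable token.
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return vocab[best], probs[best]

# Toy vocabulary and invented logits for the prompt "2 + 2 ="
vocab = ["4", "5", "fish"]
logits = [3.2, 1.1, -2.0]

token, prob = greedy_next_token(logits, vocab)
print(token)  # "4", because its (made-up) score dominates
```

<p>Everything upstream of this step is matrix arithmetic producing those logits; nothing in the loop resembles deliberation.</p>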

<p>They do it well, they do it efficiently. But they don’t reason.</p>

<p>For instance, they know that 2+2=4 simply because, countless times in the training dataset, the characters “2,” “+,” “2,” and “=” were always followed by the character “4.” For fans of alternative rock, there’s also <a href="https://youtu.be/2w6kHS_IRrE?si=4P46sbNn2XGCrRai">“5”</a>.<br />
And what if the model’s creators were big fans of alternative rock? We might have ended up with a ChatGPT where 2+2=5.
In fact, early versions of ChatGPT couldn’t handle math at all. And even now, it’s not guaranteed that it truly understands math; it’s highly likely that external logic kicks in whenever math is required.</p>


<p>Some might argue that reasoning is nothing more than predicting the next thing to say or think.<br />
I respond by saying that art and even problem-solving wouldn’t exist if the very concept of reasoning were so simplistic.<br />
Art—in all its forms, <a href="https://it.m.wikipedia.org/wiki/The_Art_of_Computer_Programming">including programming</a>—is a generative process. It doesn’t predict the “next thing to do” based on something already known (e.g., 2+2=), but it <strong>creates</strong> the next action, attempting not to resemble something already familiar.<br />
A language model doesn’t do this; it cannot invent something new. Everything that comes out of its final layer is just a number associated with a token it already knows. It’s a closed world.</p>

<p>Moreover, LLMs have a very basic limitation: they know what they’re about to say and what they’ve already said, but not what they will say in the next few words. This is why models like GPT sometimes repeat themselves. They lack what, in the world of Regular Expressions, is called a “positive lookahead.”</p>
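<p>For readers unfamiliar with the regex term, here is a quick illustration of positive lookahead, the ability to condition on what comes next without consuming it, which is exactly the capability an autoregressive decoder lacks:</p>

```python
import re

# Positive lookahead (?=...) matches a position only if what follows
# matches the inner pattern, without consuming it.
# Here: match "foo" only when it is immediately followed by "bar".
text = "foobar foobaz"
matches = re.findall(r"foo(?=bar)", text)
print(matches)  # ['foo'] - only the first "foo" qualifies
```
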

<p>Accepting that an LLM could be AGI means admitting that we have nothing left to discover and that if something remains unknown, it’s only because human inductive reasoning hasn’t yet reached that conclusion.<br />
I really like this image to describe what I mean:</p>

<p><img src="https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fi.pinimg.com%2Foriginals%2Fda%2F53%2F8c%2Fda538c469dcfcb3da2633665689fc420.png&amp;f=1&amp;nofb=1&amp;ipt=bbaf4433cd8fec5fa0678e6531e82ad334212dc96779f73a60dc4b37198d0992&amp;ipo=images" alt="Knowledge" /></p>

<p>An LLM stops at knowledge. A human has the power of intuition and even <strong>error.</strong> Without error, there would be no penicillin; without intuition, there would be nothing.</p>]]></content><author><name>Matteo Notaro</name><email>matteonotaro@outlook.it</email></author><category term="ai" /><summary type="html"><![CDATA[LLM and AGI Nowadays, there’s a lot of talk about LLMs (large language models), often in connection with the concept of AGI (Artificial General Intelligence). AGI represents the challenge of creating a general-purpose AI capable of doing everything as well as humans—or even better, in the case of ASI (Artificial Super Intelligence). Notable experiments have already emerged, one of the most famous being Baby AGI. The question I’d like to raise is the following: Are we sure a language model is what will lead us to this goal? They predict, they don’t think. I ask because many overlook a fundamental and self-evident concept: these models don’t think. What these models do is one of the most classic tasks in AI: they predict a value. They analyze the “past” and, after a seemingly infinite series of matrix multiplications, output a bunch of values between 0 and 1, picking the token with the highest probability. They do it well, they do it efficiently. But they don’t reason. For instance, they know that 2+2=4 simply because, countless times in the training dataset, the characters “2,” “+,” “2,” and “=” were always followed by the character “4.” For fans of alternative rock, there’s also “5”. And what if the model’s creators were big fans of alternative rock? We might have ended up with a ChatGPT where 2+2=5. In fact, early versions of ChatGPT didn’t know how to handle math at all. And even now, it’s not guaranteed that it truly understands math, it’s highly likely that an external logic kicks in whenever math is required. Some might argue that reasoning is nothing more than predicting the next thing to say or think. 
I respond by saying that art and even problem-solving wouldn’t exist if the very concept of reasoning were so simplistic. Art —in all its forms, including programming — is a generative process. It doesn’t predict the “next thing to do” based on something already known (e.g., 2+2=), but it creates the next action, attempting not to resemble something already familiar. A language model doesn’t do this; it cannot invent something new. Everything that comes out of its final layer is just a number associated with a token it already knows. It’s a closed world. Moreover, LLMs have a very basic limitation: they know what they’re about to say and what they’ve already said, but not what they will say in the next few words. This is why models like GPT sometimes repeat themselves. They lack what, in the world of Regular Expressions, is called a “positive lookahead.” Accepting that an LLM could be AGI means admitting that we have nothing left to discover and that if something remains unknown, it’s only because human inductive reasoning hasn’t yet reached that conclusion. I really like this image to describe what I mean: An LLM stops at knowledge. A human has the power of intuition and even error. Without error, there would be no penicillin; without intuition, there would be nothing.]]></summary></entry><entry><title type="html">Who Am I.</title><link href="https://mattnot.github.io/2024/10/20/who-am-i.html" rel="alternate" type="text/html" title="Who Am I." /><published>2024-10-20T00:00:00+00:00</published><updated>2024-10-20T00:00:00+00:00</updated><id>https://mattnot.github.io/2024/10/20/who-am-i</id><content type="html" xml:base="https://mattnot.github.io/2024/10/20/who-am-i.html"><![CDATA[<h1 id="who-am-i"><strong>Who Am I?</strong></h1>
<p>Hi there! I’m Matteo Notaro (he/him), an Italian tech enthusiast with what might seem like an ordinary passion: everything related to technology. I love programming, problem-solving, and basically anything that rhymes with “tech.” Oh, and I’m a huge fan of 🍕 and ☕ – but I guess that’s pretty standard for someone living in the Bel Paese.</p>

<p>Jokes aside, my journey in the tech world began quite conventionally: I earned my bachelor’s degree in Computer Science from the University of Calabria in 2020, right in the middle of a global pandemic. It was a challenging time, for sure, but also incredibly stimulating.</p>

<hr />

<h1 id="my-educational-journey"><strong>My Educational Journey</strong></h1>
<p>During my bachelor’s studies, I was lucky enough to participate in an Erasmus+ project in Klagenfurt am Wörthersee, Austria. While there, I began working on my thesis, developing a parser for a programming language based on Answer Set Programming. Not satisfied with the result, I decided to push further: I turned that project into an IDE-as-a-Service. The result? A degree with top honors (110/110 cum laude) and an experience I’m still proud of today.</p>

<p>After earning my bachelor’s, I pursued a master’s degree in Artificial Intelligence and Data Science. However, I eventually decided to shift gears and completed my master’s in Computer Engineering at UniMarconi.</p>

<hr />

<h1 id="my-professional-experiences"><strong>My Professional Experiences</strong></h1>
<p>I started working during high school, taking on small web development projects. Nothing groundbreaking – mostly WordPress here and there – but these projects helped me take my first steps in the field.</p>

<p>In 2020, one of my professors offered me a position in a spin-off company of the University of Calabria. After a few months working part-time, I seized the opportunity to work full-time as a developer. That’s where my career truly began, primarily as a full-stack developer working with Angular, Spring Boot, and SQL databases.</p>

<p>The real turning point came when I discovered the world of Big Data. Apache Spark, Scala, and NoSQL databases opened up a fascinating universe that motivated me to dig deeper. Along the way, I also explored DevOps tools like Docker, Kubernetes, and Jenkins – because why not keep things exciting?</p>

<p>One project I’m particularly proud of is <a href="https://datagan.io/"><em>DataGan</em></a>, a system for generating synthetic data based on production data. The idea was mine, and it grew into a full-fledged company project.</p>

<p>In my most recent role, I held the title of Head of R&amp;D. Unfortunately, the reality didn’t match the promise of the title, and I realized it wasn’t the right place for my professional growth.<br />
So I changed companies, drawn by the promise of exciting projects in the AI field. Once there, however, I found myself working on uninspiring web development tasks for products with little technological appeal and poorly managed workflows. For 10 of the 11 months I spent there, I felt underutilized and unchallenged, struggling to find any real engagement in my work.</p>

<p>Since November 2024, I’ve been working at Agile Lab, where I hope to continue building innovative projects in AI and data.</p>

<hr />

<h1 id="what-drives-me"><strong>What Drives Me</strong></h1>
<p>If I had to summarize what motivates me in one word, it would be <em>creation</em>. Creating solutions to complex problems, building useful tools, and, why not, stirring up some creative chaos when needed. I love diving into challenging new technologies, learning, and constantly improving.</p>

<hr />

<h1 id="why-am-i-writing-this-blog"><strong>Why Am I Writing This Blog?</strong></h1>
<p>This blog is my space to share experiences, ideas, and projects. Whether you’re here for inspiration, curiosity, or just to laugh at a few tech anecdotes, I’m glad you stopped by.</p>

<p>If you’d like to share thoughts, collaborate on a project, or just chat about technology (or pizza and coffee), feel free to reach out!</p>

<p><strong>Thanks for reading, and happy hacking!</strong> 😊</p>]]></content><author><name>Matteo Notaro</name><email>matteonotaro@outlook.it</email></author><category term="me" /><summary type="html"><![CDATA[Who Am I? Hi there! I’m Matteo Notaro (he/him), an Italian tech enthusiast with what might seem like an ordinary passion: everything related to technology. I love programming, problem-solving, and basically anything that rhymes with “tech.” Oh, and I’m a huge fan of 🍕 and ☕ – but I guess that’s pretty standard for someone living in the Bel Paese. Jokes aside, my journey in the tech world began quite conventionally: I earned my bachelor’s degree in Computer Science from the University of Calabria in 2020, right in the middle of a global pandemic. It was a challenging time, for sure, but also incredibly stimulating. My Educational Journey During my bachelor’s studies, I was lucky enough to participate in an Erasmus+ project in Klagenfurt am Wörthersee, Austria. While there, I began working on my thesis, developing a parser for a programming language based on Answer Set Programming. Not satisfied with the result, I decided to push further: I turned that project into an IDE-as-a-Service. The result? A degree with top honors (110/110 cum laude) and an experience I’m still proud of today. After earning my bachelor’s, I pursued a master’s degree in Artificial Intelligence and Data Science. However, I eventually decided to shift gears and completed my master’s in Computer Engineering at UniMarconi. My Professional Experiences I started working during high school, taking on small web development projects. Nothing groundbreaking – mostly WordPress here and there – but these projects helped me take my first steps in the field. In 2020, one of my professors offered me a position in a spin-off company of the University of Calabria. After a few months working part-time, I seized the opportunity to work full-time as a developer. 
That’s where my career truly began, primarily as a full-stack developer working with Angular, Spring Boot, and SQL databases. The real turning point came when I discovered the world of Big Data. Apache Spark, Scala, and NoSQL databases opened up a fascinating universe that motivated me to dig deeper. Along the way, I also explored DevOps tools like Docker, Kubernetes, and Jenkins – because why not keep things exciting? One project I’m particularly proud of is DataGan, a system for generating synthetic data based on production data. The idea was mine, and it grew into a full-fledged company project. In my most recent role, I held the title of Head of R&amp;D. Unfortunately, the reality didn’t match the promise of the title, and I realized it wasn’t the right place for my professional growth. After realizing that, I changed company, drawn by the promise of exciting projects in the AI field. However, once in the new one, I found myself working on uninspiring web development tasks for products with little technological appeal and poorly managed workflows. For 10 out of the 11 months I spent there, I felt underutilized and unchallenged, struggling to find any real engagement in my work. Since November 2024, I’ve been working at Agile Lab, where I hope to continue building innovative projects in AI and data. What Drives Me If I had to summarize what motivates me in one word, it would be creation. Creating solutions to complex problems, building useful tools, and, why not, stirring up some creative chaos when needed. I love diving into challenging new technologies, learning, and constantly improving. Why Am I Writing This Blog? This blog is my space to share experiences, ideas, and projects. Whether you’re here for inspiration, curiosity, or just to laugh at a few tech anecdotes, I’m glad you stopped by. If you’d like to share thoughts, collaborate on a project, or just chat about technology (or pizza and coffee), feel free to reach out! 
Thanks for reading, and happy hacking! 😊]]></summary></entry><entry><title type="html">Projects</title><link href="https://mattnot.github.io/2021/10/23/projects.html" rel="alternate" type="text/html" title="Projects" /><published>2021-10-23T00:00:00+00:00</published><updated>2021-10-23T00:00:00+00:00</updated><id>https://mattnot.github.io/2021/10/23/projects</id><content type="html" xml:base="https://mattnot.github.io/2021/10/23/projects.html"><![CDATA[<h1 id="public-projects-where-ive-been-involved">Public projects where I’ve been involved</h1>

<ul>
  <li><a href="https://datagan.io/">Datagan</a>: a tool for generating synthetic data. The project is no longer in my hands.</li>
  <li><a href="https://github.com/MattNot/Pythia">Pythia</a>: a tool designed to automatically generate docstrings for Python files using large language models (LLMs). By analyzing your Python code, Pythia creates detailed and accurate documentation for functions and classes, making your code more understandable and maintainable.</li>
  <li><a href="https://github.com/MattNot/ASPIDEaS">ASPIDEaS</a>: an IDE-as-a-Service for Answer Set Programming.</li>
</ul>]]></content><author><name>Matteo Notaro</name><email>matteonotaro@outlook.it</email></author><category term="me" /><summary type="html"><![CDATA[Public projects where I’ve been involved Datagan: tool for generating synthetic data. Not in my possession right now. Pythia: Pythia is a tool designed to automatically generate docstrings for Python files using the power of large language models (LLMs). By analyzing your Python code, Pythia creates detailed and accurate documentation for functions and classes, making your code more understandable and maintainable. ASPIDEaS: an IDE as Service for Answer Set Programming.]]></summary></entry></feed>