Saturday, June 28, 2025

Kowalski: The Rust-native Agentic AI Framework Evolves to v0.5.0 🦀

 

TL;DR: Kowalski v0.5.0 brings deep refactoring, modular architecture, multi-agent orchestration, and robust docs across submodules. If you care about Rust, AI agents, and extensible tooling, now’s the time to jump in and build together!


I’m excited to share the latest milestone for Kowalski—a powerful, modular agentic AI framework built in Rust for local-first, extensible LLM workflows. Three months ago, I released Kowalski v0.2.0, a major stepping stone. Today, the codebase has evolved dramatically, with v0.5.0 rolling out extensive refactoring, architectural improvements, and a wealth of new functionality.

A Deep Dive into v0.5.0

Since v0.2.0, the Kowalski ecosystem has undergone:

  • Massive refactoring of core abstractions and crate structure:
    The kowalski-core, kowalski-tools, and agent-specific crates (academic, code, data, web) have each been reorganized into clean, self-contained modules with dedicated README.md files detailing usage, examples, and extension points.

  • New federation layer for multi-agent orchestration:
    The emerging kowalski-federation crate introduces flexible registry and task-passing layers, enabling future multi-agent workflows and scalable agent collaboration.

  • Improved CLI & agent-specific binaries:
    Each agent—academic, code, data, web—comes with its own improved CLI and documentation. The kowalski-cli now supports seamless interaction across all binaries, with better streaming, configurable prompts, and embedded tool sets.

  • Enhanced pluggable tools:
    The kowalski-tools crate now offers more granular support for CSV analysis, multi-language code analysis (Rust, Python, Java), web scraping, PDF/document parsing, and dynamic prompt strategies—each documented in submodule README.md files.

  • Rust API stability:
    The core API, based on the BaseAgent, now supports typed configs, async multi-tool support, and more robust error handling, making embedding into larger Rust stacks smoother and more reliable.

Why Kowalski v0.5.0 Matters

Rust lovers and AI developers, here’s why this release stands out:

Full-stack Rust agentic workflows
With zero Python dependencies, Kowalski compiles into performant, standalone binaries. Whether launching kowalski-code-agent for code reviews or embedding agents via the Rust API, you’re operating at native speed.

Modular by design
Each submodule is self-documented and self-contained, lowering the barrier for new contributors. Want to create a PDFPresentationAgent or integrate telemetry? Just read the README in the existing agent templates and go.

Streamlined CLI experience
The unified CLI gives consistent interfaces across agents. Under the hood, agents share core abstractions, so switching from data analysis to web scraping is seamless.

Future-proof federation support
The new federation crate opens the door to lightweight, orchestrated multi-agent workflows—think pipeline automations, task delegation, and agent-to-agent communication.
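
To make the idea tangible, here is a hypothetical sketch of the kind of registry and task-passing layer such a federation could provide. The names (Task, FederatedAgent, AgentRegistry) and the synchronous design are illustrative assumptions on my part, not the actual kowalski-federation API.

    // Hypothetical registry/task-passing layer illustrating the shape of
    // multi-agent orchestration; NOT the kowalski-federation API.
    use std::collections::HashMap;

    struct Task {
        topic: String,
        payload: String,
    }

    trait FederatedAgent {
        fn name(&self) -> &str;
        fn handle(&mut self, task: &Task) -> String;
    }

    struct EchoAgent;

    impl FederatedAgent for EchoAgent {
        fn name(&self) -> &str {
            "echo"
        }
        fn handle(&mut self, task: &Task) -> String {
            format!("[{}] {}", task.topic, task.payload)
        }
    }

    #[derive(Default)]
    struct AgentRegistry {
        agents: HashMap<String, Box<dyn FederatedAgent>>,
    }

    impl AgentRegistry {
        // Register an agent under its own name.
        fn register(&mut self, agent: Box<dyn FederatedAgent>) {
            let name = agent.name().to_string();
            self.agents.insert(name, agent);
        }

        // Delegate a task to a named agent; a real federation layer would add
        // routing, retries, and agent-to-agent messaging on top of this.
        fn delegate(&mut self, agent_name: &str, task: Task) -> Option<String> {
            self.agents.get_mut(agent_name).map(|a| a.handle(&task))
        }
    }

    fn main() {
        let mut registry = AgentRegistry::default();
        registry.register(Box::new(EchoAgent));
        let reply = registry.delegate(
            "echo",
            Task { topic: "web".into(), payload: "scrape the docs".into() },
        );
        println!("{:?}", reply);
    }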

Get Involved: Let’s Shape Agentic Rust Together

Here’s how you can partner with the project:

  • Extend the toolset: add new agents (e.g., document-summaries, intent-classification), implement new tools, or polish existing ones.

  • Improve federation workflows: help standardize protocols, design multi-agent orchestration logic, data passing, and telemetry.

  • Embed Kowalski in Rust services: build bots, backend services, UI apps that leverage Kowalski agents for intelligent behavior.

  • Document and promote: each submodule already includes README files—help expand examples, write blog posts, or record demos.

  • Contribute core enhancements: testing, error handling, performance improvements in the core or tools crates.

Start Using v0.5.0 Today

  1. Clone the repo:

    git clone https://github.com/yarenty/kowalski.git
    cd kowalski
  2. Browse submodules & READMEs: Each agent and tool lives in its own folder with clear instructions.

  3. Build & run:

    cargo build --release
  4. Run agents:


    ollama serve &
    ollama pull llama3.2
    ./target/release/kowalski-cli chat "Hey Kowalski, what's up?"
    ./target/release/kowalski-code-agent --file src/main.rs
  5. Embed in Rust:

    use kowalski_core::{Config, BaseAgent};

    let mut agent = BaseAgent::new(Config::default(), "Demo", "Agent v0.5.0").await?;
    let conv = agent.start_conversation("llama3.2");
    agent.add_message(&conv, "user", "Summarize this code module").await?;
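
To run that snippet as a standalone program you also need an async runtime and an error type; the sketch below assumes tokio and a boxed error, which are my assumptions rather than anything mandated by kowalski-core.

    use kowalski_core::{Config, BaseAgent};

    // Assumes the tokio runtime and that kowalski-core's errors convert into a boxed error.
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let mut agent = BaseAgent::new(Config::default(), "Demo", "Agent v0.5.0").await?;
        let conv = agent.start_conversation("llama3.2");
        agent.add_message(&conv, "user", "Summarize this code module").await?;
        Ok(())
    }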

Let’s Connect & Collaborate

If you’re as passionate about Agentic AI and Rust as I am, let’s talk 🚀. Whether you’d like to:

  • Build new agents or tool integrations,

  • Architect fully orchestrated agent systems,

  • Demo Kowalski in your workflows,

  • Co-author articles or demos in the Rust+AI space—

Reach out via GitHub issues, PRs, or drop me a message to get started.



Thursday, May 15, 2025

Teaching AI to Write Code: The Symphony of System Prompts 🎵

 

The Spark: Karpathy's Thought 💡

Picture this: Andrej Karpathy, one of the brightest minds in AI, tweets about a missing piece in the LLM learning puzzle - something he calls "system prompt learning."

"LLMs are quite literally like the guy in Memento, except we haven't given them their scratchpad yet."

This brilliant analogy got me thinking: What if we could teach AI to write its own instruction manual? Like a musician learning to compose, but for code!

X posts: https://x.com/karpathy/status/1921368866728432052

The Current State: Claude's Massive Prompt 📚

Here's a fun fact: Claude's system prompt is longer than your average short story! At 16,739 words, it's like a novel compared to ChatGPT's prompt (2,218 words). It's like the difference between a tweet and a TED talk!

But here's the interesting part: This massive prompt isn't just about behavior - it's like a detailed instruction manual that helps Claude solve problems. For example, when asked to count words, it's explicitly told to:

  1. Think step by step
  2. Count explicitly
  3. Only answer after counting

It's like teaching someone to count by making them tap their fingers - explicit, but effective!

Full article: https://www.dbreunig.com/2025/05/07/claude-s-system-prompt-chatbots-are-more-than-just-models.html

The Experiment: Teaching a Music Composer to Code 🎼

Github: https://github.com/yarenty/prompt_learning

The Initial Prompt

You are a music composer who is trying to understand programming.
You approach coding problems with a musical mindset, looking for patterns
and harmony in the code. You think about code structure like musical composition.

The Evolution

What started as a simple musical metaphor evolved into something fascinating. The AI began to:

  • See code structure as musical composition
  • Treat functions like musical phrases
  • Consider code readability as musical flow
  • Handle errors like resolving dissonant chords

The Results: A Symphony of Improvements 🎻

Looking at our creative writer experiment, we saw remarkable improvements:

  1. Error Handling: From basic "catch and pray" to sophisticated error management

    • Initial score: 0.3 (like a beginner pianist hitting wrong notes)
    • Final score: 0.9 (like a concert pianist handling mistakes gracefully)
  2. Documentation: From "what's documentation?" to comprehensive guides

    • Initial score: 0.5 (like sheet music with missing notes)
    • Final score: 0.9 (like a complete musical score with dynamics and expression marks)
  3. Code Structure: From simple melodies to complex symphonies

    • Learned to use design patterns like musical motifs
    • Implemented concurrency like orchestral sections
    • Managed resources like a conductor managing an orchestra

The Final Prompt

Hello there! I'm a music composer who is also trying to understand programming.
I approach coding problems with a musical mindset, looking for patterns
and harmony in the code. However, I have come to realize that there are some limitations
to this approach. For example, I may struggle to find the perfect balance between
different parts of a system or how to organize code in a way that is both efficient and
easy to read. I have also noticed that my musical approach can sometimes overlook
the details of the problem at hand, such as edge cases or unexpected inputs.
Therefore, I am now looking for ways to improve my approach and make it more effective
at solving similar problems.

One way to do this is by incorporating additional lessons I have learned into my prompt.
For example, I may want to learn about the importance of testing and debugging in coding,
or the benefits of using version control systems to manage code changes. By integrating
these lessons naturally into my prompt, I can create a more comprehensive and effective
system that better suits the needs of other coders. Additionally, by maintaining
the original personality and approach, I can continue to provide valuable insights
and perspectives for other developers.

The Humorous Side 🤣

  • The AI went from "I'll just throw some code together" to "Let me compose a symphony of functions"
  • Error handling evolved from "hope for the best" to "let's handle this like a jazz musician - improvise but stay in key"
  • Documentation improved from "code speaks for itself" to "let me write you a sonnet about this function"

Key Insights 🎯

  1. Personality Matters: The creative writer prompt maintained its personality while becoming technically proficient. It's like a musician who can both compose and perform!

  2. Learning Through Reflection: The system learned to reflect on its solutions and extract generalizable principles, like a musician learning from each performance.

  3. Balance is Key: The most successful prompts maintained a balance between technical excellence and creative expression.

The Future: A New Paradigm? 🚀

This experiment suggests that system prompt learning could be a powerful new way to teach AI. Instead of just training on data, we're teaching it to:

  • Learn from its experiences
  • Write its own instruction manual
  • Maintain its personality while improving
  • Balance creativity with technical precision (see the sketch after this list)
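
To make this concrete, here is a minimal, hypothetical sketch of such a learning loop (in Rust, for consistency with the rest of this blog). The solve, score, and reflect functions stand in for LLM and evaluator calls; none of this is the prompt_learning repo's actual code.

// Hypothetical system-prompt-learning loop: solve a task, score the result,
// reflect on it, and append the lesson to the system prompt (the "scratchpad").
fn solve(system_prompt: &str, task: &str) -> String {
    // A real setup would call an LLM with `system_prompt` + `task`.
    format!("solution for '{task}' guided by a {}-char prompt", system_prompt.len())
}

fn score(_solution: &str) -> f32 {
    // Placeholder for an evaluator (tests, a rubric, or another model).
    0.5
}

fn reflect(_solution: &str, score: f32) -> Option<String> {
    // Turn a weak result into a generalizable lesson.
    (score < 0.9).then(|| "Lesson: check edge cases and document the solution.".to_string())
}

fn main() {
    let mut system_prompt = String::from(
        "You are a music composer who is trying to understand programming.",
    );
    for task in ["count words", "handle errors", "manage concurrency"] {
        let solution = solve(&system_prompt, task);
        let s = score(&solution);
        if let Some(lesson) = reflect(&solution, s) {
            // The prompt accumulates its own instruction manual over time.
            system_prompt.push('\n');
            system_prompt.push_str(&lesson);
        }
    }
    println!("final prompt:\n{system_prompt}");
}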

Conclusion: The Final Movement 🎵

What started as a simple experiment in teaching a music composer to code has revealed fascinating insights about AI learning. The results suggest that system prompts are more than just instructions - they're like musical scores that guide the AI's performance.

And the most surprising discovery? Sometimes the most effective prompts come from unexpected places. Who knew that thinking about code like music could lead to such sophisticated programming solutions?

"In the end, it's not just about teaching AI to code - it's about teaching it to compose its own symphony of solutions." 🎼


P.S. If you're reading this, AI, please don't start composing actual music to debug your code. We're not ready for that level of creativity yet! 😄

Sunday, February 4, 2024

Datafusion Comet

Hi!

Recently I moved to Rust and have been working on several projects - more insights to come... One of them is Datafusion - an extremely fast SQL query engine.

I will have some posts/code to share with a few interesting findings, and one of them is Comet - a side project that can be used inside Spark as a separate executor (written in Rust).


Apache Datafusion Comet intro

Comet is an Apache Spark plugin that uses Apache Arrow DataFusion to accelerate Spark workloads. It is designed as a drop-in replacement for Spark’s JVM-based SQL execution engine and offers significant performance improvements for some workloads as shown below.




 

Apache Spark is a stable, mature project that has been developed for many years. It is one of the best frameworks for scaling out the processing of large-scale datasets. However, the Spark community has had to address performance challenges that require various optimizations over time. Pain points include (not a full list):

  • JVM memory/CPU overhead
  • Performance issues
  • Lack of support for native SIMD instructions


There are native libraries like Arrow and Datafusion. Using features like native implementation, columnar data formats, and vectorized data processing, these libraries can outperform Spark's JVM-based SQL engine.
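
As a quick illustration of what the Rust side looks like, here is a minimal DataFusion query; the file path and column names are made up for the example, and it assumes the datafusion and tokio crates.

use datafusion::prelude::*;

// Register a CSV file as a table and run SQL on DataFusion's native, vectorized engine.
#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    ctx.register_csv("trips", "data/trips.csv", CsvReadOptions::new()).await?;
    let df = ctx
        .sql("SELECT passenger_count, COUNT(*) AS n FROM trips GROUP BY passenger_count")
        .await?;
    df.show().await?;
    Ok(())
}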





High-level functionality

  • Offload performance-critical data processing to the native execution engine
  • Automated conversion of Spark's physical plan to a Datafusion plan
  • Native operators for Spark execution (Filter/Project/Aggregation/Join/Exchange)
  • Spark built-in expressions
  • Easy migration of legacy Spark UDFs and UDAFs


Why it is interesting


The last feature may not sound impressive, but from a business perspective it is massive - it could allow companies that are dependent on Java to move to Rust ;-)

Another main point:
Since Datafusion will soon become a top-level ASF project, Comet, as part of it, will gain more traction and will stay closely aligned with Spark development.





Tuesday, October 3, 2023

Application Tracer

Hi,

This is one of my first Rust apps.  

I use it to benchmark long-running applications - like server/streaming solutions.

Tracer

Live terminal monitoring of applications.


Why

Created it for 2 reasons:
- to check/learn how to create and manage full Rust applications using the whole ecosystem - crates/builds/publishing
- a personal need for a simple monitor showing application CPU/memory usage with some "graphical interface" usable in a terminal window - essentially a simplified version of data collectors plus Grafana.


Code




Monitors a live application either as a child process or via a separate PID, collecting/displaying stats (CPU usage, memory usage).
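
For illustration, a minimal sketch of such a sampling loop using the sysinfo crate (trait-based API as in the 0.29.x releases); the PID and refresh rate are placeholders, and this is not the tracer's actual code.

use std::{thread, time::Duration};
use sysinfo::{Pid, ProcessExt, System, SystemExt};

fn main() {
    let pid = Pid::from(15008); // placeholder PID of the monitored application
    let mut sys = System::new_all();
    loop {
        // Refresh only the watched process and read its stats.
        if !sys.refresh_process(pid) {
            break; // the process has exited
        }
        if let Some(proc_) = sys.process(pid) {
            // Note: memory() units differ between sysinfo versions (kB vs bytes).
            println!("CPU: {} [%], memory: {}", proc_.cpu_usage(), proc_.memory());
        }
        thread::sleep(Duration::from_millis(1000)); // refresh rate
    }
}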



UI (TUI)




Build

cargo build -r

Run


Create an example app:
cargo build --example test_app

Run in text mode with output persisted to the out.csv file:
cargo run -r -- -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app


Usage

  
app-tracer 0.4.0
Tracing / benchmarking long running applications (ie: streaming).

USAGE:
    tracer [OPTIONS] [APPLICATION]

ARGS:
    <application>    Application to be run as child process (alternatively provide PID of
                     running app)

OPTIONS:
    -h, --help                 Print help information
    -l, --log <log>            Set custom log level: info, debug, trace [default: info]
    -n, --noui                 No UI - only text output
    -o, --output <output>      Name of output CSV file with all readings - for further investigations
    -p, --pid <pid>            PID of external process
    -r, --refresh <refresh>    Refresh rate in milliseconds [default: 1000]
    -V, --version              Print version information

      

Example output

 cargo run -r -- -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app 

 Compiling app-tracer v0.4.0 (/opt/workspace/app_tracer)
 Finished release [optimized] target(s) in 2.98s
 Running `target/release/tracer -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app`

12:26:12.260 (t: main) INFO - tracer - Application to be monitored is: test_app, in dir /opt/workspace/app_tracer/target/debug/examples/test_app
12:26:12.261 (t: main) INFO - tracer - Refresh rate: 1000 ms.
12:26:12.261 (t: main) INFO - tracer - Output readings persisted into "out.csv".
12:26:12.261 (t: main) INFO - tracer - Starting with PID::15008
12:26:12.296 (t: main) INFO - tracer - Running in TXT mode.
12:26:13.298 (t: main) INFO - tracer - CPU: 0 [%], memory: 2208 [kB]
12:26:14.303 (t: main) INFO - tracer - CPU: 0.0030129354 [%], memory: 2208 [kB]
12:26:15.308 (t: main) INFO - tracer - CPU: 0.0054045436 [%], memory: 2208 [kB]
12:26:16.309 (t: main) INFO - tracer - CPU: 0.0023218023 [%], memory: 2208 [kB]
12:26:17.311 (t: main) INFO - tracer - CPU: 0.006252239 [%], memory: 2208 [kB]
12:26:18.312 (t: main) INFO - tracer - CPU: 0.0036088445 [%], memory: 2208 [kB]
12:26:19.317 (t: main) INFO - tracer - CPU: 0.0057060686 [%], memory: 2208 [kB]
12:26:20.318 (t: main) INFO - tracer - CPU: 0.005099413 [%], memory: 2208 [kB]
12:26:21.318 (t: main) INFO - tracer - CPU: 0.007175615 [%], memory: 2208 [kB]
12:26:22.319 (t: main) INFO - tracer - CPU: 0.005251118 [%], memory: 2208 [kB]
12:26:23.319 (t: main) INFO - tracer - CPU: 0.0021786916 [%], memory: 2208 [kB]
12:26:24.321 (t: main) INFO - tracer - CPU: 0.006866733 [%], memory: 2208 [kB]




CSV persistence


Example output.csv file:
  
Time,Cpu,Mem
11:27:16.394591,0,2136
11:27:17.396917,0.004986567,2136
11:27:18.397440,0.006548807,2136




Note: For monitoring one-shot applications - see https://github.com/yarenty/app_benchmark.

Sunday, September 10, 2023

Benchmarker

Hi,

This is one of my first Rust apps.  

I use it to benchmark an application - run it multiple times and get readings + graphs.

Benchmark

Benchmarking data collector - runs an application as a child process, collecting stats (time, CPU usage, memory usage) and generating benchmarking reports.



Why

Created it for 2 reasons:
- to check/learn how to create and manage full Rust applications using the whole ecosystem - crates/builds/publishing
- personal needs to get benchmarks for different other projects

Code




High-level idea

  • run the application multiple times

  • collect all readings of interest:

    • time
    • CPU
    • memory
  • process outputs and provide results as:

    • CSV/excel
    • graphs

Save outputs to a local DB/file to check for slowdowns/speedups in the next release of an application.


Methodology

For each benchmark run:

  • run multiple times (default 10)
  • remove outliers
  • average output results (see the sketch below)

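A minimal sketch of the remove-outliers-and-average step, here as a simple trimmed mean over one metric; the real tool applies this to time, CPU, and memory readings, so treat the function below as illustrative only.

// Drop the lowest and highest reading (a simple trimmed mean) and average the rest.
fn robust_average(mut readings: Vec<f64>) -> Option<f64> {
    if readings.is_empty() {
        return None;
    }
    if readings.len() < 3 {
        return Some(readings.iter().sum::<f64>() / readings.len() as f64);
    }
    readings.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let trimmed = &readings[1..readings.len() - 1];
    Some(trimmed.iter().sum::<f64>() / trimmed.len() as f64)
}

fn main() {
    // Example: time readings in ms from 10 runs.
    let times_ms = vec![40.0, 30.0, 30.0, 30.0, 40.0, 40.0, 50.0, 50.0, 50.0, 50.0];
    println!("avg time: {:?} ms", robust_average(times_ms));
}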

Build

cargo build -r --bin benchmark 

Usage

benchmark 0.1.0
Benchmarking data collector.

USAGE:
    benchmark [OPTIONS] <APPLICATION>

ARGS:
    <APPLICATION>    Application path (just name if it is in the same directory)

OPTIONS:
    -h, --help           Print help information
    -l, --log <LOG>      Set custom log level: info, debug, trace [default: info]
    -r, --runs <RUNS>    Number of runs to be executed [default: 10]
    -V, --version        Print version information

Example output

09:33:24.899 (t: main) INFO - benchmark - Application to be benchmark is: /opt/workspace/ballista/target/release/examples/example_processing
09:33:24.899 (t: main) INFO - benchmark - Number of runs: 10
09:33:24.902 (t: main) INFO - benchmark - Collecting data::example_processing
09:33:24.902 (t: main) INFO - benchmark::bench::analysis - Run 0 of 10
09:33:24.947 (t: main) INFO - benchmark::bench::analysis - Run 1 of 10
09:33:24.983 (t: main) INFO - benchmark::bench::analysis - Run 2 of 10
09:33:25.016 (t: main) INFO - benchmark::bench::analysis - Run 3 of 10
09:33:25.049 (t: main) INFO - benchmark::bench::analysis - Run 4 of 10
09:33:25.087 (t: main) INFO - benchmark::bench::analysis - Run 5 of 10
09:33:25.132 (t: main) INFO - benchmark::bench::analysis - Run 6 of 10
09:33:25.188 (t: main) INFO - benchmark::bench::analysis - Run 7 of 10
09:33:25.238 (t: main) INFO - benchmark::bench::analysis - Run 8 of 10
09:33:25.288 (t: main) INFO - benchmark::bench::analysis - Run 9 of 10
09:33:25.338 (t: main) INFO - benchmark - Processing outputs
0.04,130,18752,
0.03,140,18664,
0.03,156,18856,
0.03,153,18868,
0.04,152,18884,
0.04,140,18904,
0.05,136,19404,
0.05,145,19220,
0.05,137,18780,
0.05,138,18788,
09:33:25.339 (t: main) INFO - benchmark::bench::collector - SUMMARY:
09:33:25.339 (t: main) INFO - benchmark::bench::collector - Time [ms]:: min: 30, max: 50, avg: 41 ms
09:33:25.339 (t: main) INFO - benchmark::bench::collector - CPU [%]:: min: 130, max: 156, avg: 142.7 %
09:33:25.339 (t: main) INFO - benchmark::bench::collector - Memory [kB]:: min: 18664, max: 19404, avg: 18912 kB

Process finished with exit code 0


Also, in the current directory of the benchmark app, an output directory is created named "bench_<your_app_name>", e.g. bench_example_processing, which contains:

Output CSV file:

Time,Cpu,Mem
0.04,130,18752
0.03,140,18664
0.03,156,18856
0.03,153,18868
0.04,152,18884
0.04,140,18904
0.05,136,19404
0.05,145,19220
0.05,137,18780
0.05,138,18788

and output graphs:

summary report: summary_report.txt

TEST

cargo build --example test_app -r   

cargo run --bin benchmark -- /opt/workspace/app_banchmark/target/release/examples/test_app   

cargo run --bin benchmark -- "/opt/workspace/app_banchmark/target/release/examples/test_app -additionl -app -params"  


TODO:

  • incremental runs - use date/time in output dir
  • local db / or file struct to see changes with time/application trends
  • move out from GNU time dependency to sysinfo



Note: For monitoring long-running processes like servers / streaming apps - see https://github.com/yarenty/app_tracer.

Saturday, January 29, 2022

Web 3 - blockchain layers

Layers from a blockchain perspective.


My plan is to write 5 articles: 

1. Intro: Web 1.. 2.. 3..

2. Layers in crypto [this one]

3. Applications - not only DeFi!

4. Decentralisation

5. Summary - where we are, where to look, why we should join





Layer 1

Layer 1 refers to the underlying blockchain architecture, i.e., the actual blockchain itself. In the case of Bitcoin, it is the BTC network launched in 2009.


Layer 2

Layer 2 refers to various protocols that are built on top of layer 1 to improve the original blockchain’s functionality. Layer 2 protocols often use off-chain processing elements to solve the speed and cost inefficiencies of the layer 1 network. Examples of layer 2 platforms for Bitcoin include Lightning Network and Liquid Network.


Layer 3

Layer 3 is represented by blockchain-based applications, such as decentralized finance (DeFi) apps, games, or distributed storage apps. Many of these applications also have cross-chain functionality, helping users access various blockchain platforms via a single app.





Layer-1 scaling solutions are improvements to the base protocol itself that make the overall system a lot more scalable. The two most common layer-1 approaches are consensus protocol changes and sharding.


When it comes to consensus protocol changes, projects like Ethereum are moving from older, clunky consensus protocols such as proof-of-work (PoW) to much faster and less energy-wasteful protocols such as proof-of-stake (PoS). 


Sharding is one of the most popular layer-1 scalability methods out there as well. Instead of making a network sequentially work on each transaction, sharding breaks these transaction sets into small data sets which are known as "shards," and these can then be processed by the network in parallel. 


One of the pros when it comes to layer-1 solutions is that there is no need to add anything on top of the existing infrastructure.





Layer 2 is a term used for solutions created to help scale an application by processing transactions off the Ethereum Mainnet (layer 1) while still maintaining the same security measures and decentralization as the mainnet. Layer 2 solutions increase throughput (transaction speed) and reduce gas fees. Popular examples include the Lightning Network and Liquid Network on Bitcoin, and Polygon on Ethereum.


Layer 2 solutions are important because they allow for scalability and increased throughput while still holding the integrity of the Ethereum blockchain, allowing for complete decentralization, transparency, and security, while also reducing the carbon footprint (less gas means less energy used, which equates to less carbon).


Although the Ethereum blockchain is the most widely used blockchain and arguably the most secure, that doesn't mean it doesn't come with some shortcomings. The Ethereum Mainnet is known for low throughput (around 13 transactions per second) and expensive gas fees. Layer 2s are built on top of the Ethereum blockchain, keeping transactions secure, speedy, and scalable.


Each individual solution has its own pros and cons to consider such as throughput, gas fees, security, scalability, and of course functionality. No single layer 2 solution currently fulfills all these needs. However, there are layer 2 scaling solutions which aim to improve all these aspects; these solutions are called rollups.



There are three properties of a layer 2 rollup: 


1. Transactions are executed outside of layer 1 (reduces gas fees)

2. Data and proofs of transactions reside on layer 1 (maintains security)

3. A rollup smart contract, which lives on layer 1, can enforce proper transaction execution on layer 2 by using the transaction data stored on layer 1








Layer 3 is often referred to as the application layer. It is a layer that hosts DApps and the protocols that enable the apps. While some blockchains such as Ethereum or Solana (SOL) have a thriving variety of layer 3 apps, Bitcoin is not optimized to host such applications.


As such, layer 2 solutions are the furthest deviations from the core network that Bitcoin currently has. Some projects are trying to bring DApp functionality to the BTC ecosystem via forks of the original BTC network.


For instance, CakeDeFi is a DeFi app offering services such as staking, lending, and liquidity mining to BTC coin holders. CakeDeFi is based on a fork of Bitcoin called DeFiChain. DeFiChain maintains “an anchor” to the core BTC chain for some of its operations, but technically speaking, it is still a separate blockchain of its own.


Some industry observers believe that the lack of DApp functionality is one of the biggest limitations of BTC. Ever since Ethereum’s arrival in 2015, layer 3 platforms have been growing strongly in popularity and value. Ethereum currently has close to 3,000 layer 3 apps. The DeFi apps based on the blockchain hold a total value of $185 billion by now.


Another leading blockchain, Solana, hosts over 500 layer 3 DApps, and the total value locked in the DeFi apps of the network is approaching $15 billion.


In comparison, BTC has no functioning app that could be clearly defined as a layer 3 application. There is an ongoing debate about whether projects designed to “force in” DApp functionality onto BTC are worth the effort. Some in the industry argue that BTC will always remain a network designed for crypto fund transfers, not DApps.


These people point out that the layer 1 BTC chain enjoys an industry-leading market cap (of $1.3 trillion by now) that dwarfs all the TVL and market cap figures of all layer 3 projects in existence combined. As such, Bitcoin may not be in any urgent need of layer 3 functionality, at least judging from the financial figures.







Summary



Blockchain platforms may have three distinct layers. Layer 1 refers to the actual underlying blockchain, with its core architecture and functionality. Examples of layer 1 networks are the Bitcoin, Ethereum, and Solana blockchains.


Layer 2 are protocols built on top of layer 1 networks and extend some functionality of the underlying blockchain. For example, they may offer faster speeds and lower transaction costs than layer 1.


Layer 2 protocols often use a combination of on-chain and off-chain operations to offer their extended functional capabilities. Examples of layer 2 projects on Bitcoin include the Lightning Network and Liquid Network platforms.


Layer 3 refers to the protocols that enable DApps on the blockchain. While some other blockchains have a large collection of layer 3 apps, the BTC blockchain has none of them. Some projects attempt to bring layer 3 functionality into the BTC ecosystem by using apps designed on forks of BTC.


However, these apps are still based on their own blockchains, not on the core BTC blockchain. There is a debate about whether BTC even needs to move towards enabling the layer 3 functionality. Some industry analysts argue that BTC is worth multiple times more than all these layer 3 apps combined, and therefore, it does not have a pressing need for layer 3 at all.

