Tuesday, October 3, 2023

Application Tracer

Hi,

This is one of my first Rust apps.

I use it to benchmark long-running applications, like server/streaming solutions.

Tracer

Live terminal monitoring of applications.


Why

I created it for two reasons:
- to check/learn how to create and manage full Rust applications using the whole ecosystem: crates/builds/publishing
- a personal need for a simple monitor showing application CPU/memory usage with a "graphical interface" usable in a terminal window; in general, a simplified version of data collectors and Grafana.


Code




Monitors a live application, either as a child process or via a separate PID, collecting and displaying stats (CPU usage, memory usage).
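Conceptually, the sampling side of this can be sketched with the standard library alone on Linux by reading /proc/&lt;pid&gt;/status. This is an illustrative sketch, not the tool's actual implementation (which uses ecosystem crates); the function names here are made up.

```rust
use std::fs;

// Parse the "VmRSS:   2208 kB" line from /proc/<pid>/status into kilobytes.
// Returns None when the field is absent (e.g. kernel threads).
fn parse_vm_rss(status: &str) -> Option<u64> {
    status
        .lines()
        .find(|l| l.starts_with("VmRSS:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

// Read the resident memory of a process by PID (Linux only).
fn memory_kb(pid: u32) -> Option<u64> {
    let status = fs::read_to_string(format!("/proc/{pid}/status")).ok()?;
    parse_vm_rss(&status)
}

fn main() {
    // Monitor our own process as a stand-in for a child process or external PID.
    let pid = std::process::id();
    if let Some(kb) = memory_kb(pid) {
        println!("PID {pid}: memory {kb} [kB]");
    }
}
```

The same loop, run once per refresh interval, yields the per-second readings shown in the example output below.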



UI (TUI)




Build

cargo build -r

Run


Create an example app:
cargo build --example test_app

Run in text mode, with output persisted to the out.csv file:
cargo run -r -- -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app


Usage

  
app-tracer 0.4.0
Tracing / benchmarking long running applications (ie: streaming).

USAGE:
    tracer [OPTIONS] [APPLICATION]

ARGS:
    <application>    Application to be run as child process (alternatively provide PID of
                     running app)

OPTIONS:
    -h, --help                 Print help information
    -l, --log <log>            Set custom log level: info, debug, trace [default: info]
    -n, --noui                 No UI - only text output
    -o, --output <output>      Name of output CSV file with all readings - for further investigations
    -p, --pid <pid>            PID of external process
    -r, --refresh <refresh>    Refresh rate in milliseconds [default: 1000]
    -V, --version              Print version information

      

Example output

 cargo run -r -- -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app 

 Compiling app-tracer v0.4.0 (/opt/workspace/app_tracer)
 Finished release [optimized] target(s) in 2.98s
 Running `target/release/tracer -n -o out.csv /opt/workspace/app_tracer/target/debug/examples/test_app`

12:26:12.260 (t: main) INFO - tracer - Application to be monitored is: test_app, in dir /opt/workspace/app_tracer/target/debug/examples/test_app
12:26:12.261 (t: main) INFO - tracer - Refresh rate: 1000 ms.
12:26:12.261 (t: main) INFO - tracer - Output readings persisted into "out.csv".
12:26:12.261 (t: main) INFO - tracer - Starting with PID::15008
12:26:12.296 (t: main) INFO - tracer - Running in TXT mode.
12:26:13.298 (t: main) INFO - tracer - CPU: 0 [%], memory: 2208 [kB]
12:26:14.303 (t: main) INFO - tracer - CPU: 0.0030129354 [%], memory: 2208 [kB]
12:26:15.308 (t: main) INFO - tracer - CPU: 0.0054045436 [%], memory: 2208 [kB]
12:26:16.309 (t: main) INFO - tracer - CPU: 0.0023218023 [%], memory: 2208 [kB]
12:26:17.311 (t: main) INFO - tracer - CPU: 0.006252239 [%], memory: 2208 [kB]
12:26:18.312 (t: main) INFO - tracer - CPU: 0.0036088445 [%], memory: 2208 [kB]
12:26:19.317 (t: main) INFO - tracer - CPU: 0.0057060686 [%], memory: 2208 [kB]
12:26:20.318 (t: main) INFO - tracer - CPU: 0.005099413 [%], memory: 2208 [kB]
12:26:21.318 (t: main) INFO - tracer - CPU: 0.007175615 [%], memory: 2208 [kB]
12:26:22.319 (t: main) INFO - tracer - CPU: 0.005251118 [%], memory: 2208 [kB]
12:26:23.319 (t: main) INFO - tracer - CPU: 0.0021786916 [%], memory: 2208 [kB]
12:26:24.321 (t: main) INFO - tracer - CPU: 0.006866733 [%], memory: 2208 [kB]




CSV persistence


Example out.csv file:
  
Time,Cpu,Mem
11:27:16.394591,0,2136
11:27:17.396917,0.004986567,2136
11:27:18.397440,0.006548807,2136




Note: For monitoring one-shot applications, see https://github.com/yarenty/app_benchmark.

Sunday, September 10, 2023

Benchmarker

Hi,

This is one of my first Rust apps.

I use it to benchmark an application: run it multiple times and get readings and graphs.

Benchmark

Benchmarking data collector - runs an application as a child process, collecting stats (time, CPU usage, memory usage) and generating benchmarking reports.



Why

I created it for two reasons:
- to check/learn how to create and manage full Rust applications using the whole ecosystem: crates/builds/publishing
- a personal need to get benchmarks for various other projects

Code




High-level idea

  • run the application multiple times

  • collect all readings of interest:

    • time
    • CPU
    • memory
  • process outputs and provide results as:

    • CSV/Excel
    • graphs

Save outputs to a local DB/file to check for slowdowns/speedups in the next release of an application.
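The collection step above can be sketched with the standard library: launch the application repeatedly with std::process::Command and record the wall-clock time of each run. This is a simplified illustration (the real collector also gathers CPU and memory), and the command used here is just a stand-in:

```rust
use std::process::Command;
use std::time::Instant;

// Run the given application `runs` times, returning wall-clock times in ms.
// (The real collector also gathers CPU and memory; this sketch times only.)
fn collect_times(program: &str, args: &[&str], runs: usize) -> Vec<u128> {
    (0..runs)
        .map(|i| {
            let start = Instant::now();
            let status = Command::new(program)
                .args(args)
                .status()
                .expect("failed to launch application");
            assert!(status.success(), "run {i} failed");
            start.elapsed().as_millis()
        })
        .collect()
}

fn main() {
    // "echo" stands in for the benchmarked application (illustrative).
    let times = collect_times("echo", &["hello"], 3);
    println!("times [ms]: {times:?}");
}
```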


Methodology

For each benchmark run:

  • run multiple times (default 10)
  • remove outliers
  • average output results
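The summarizing steps above can be sketched as: sort the readings, trim the extremes as outliers, and average the rest. This is a minimal min/max trim for illustration; the tool's exact outlier rule may differ:

```rust
// Trim the lowest and highest reading as outliers, then average the rest.
// Returns (min, max, avg) over the trimmed readings.
fn summarize(mut readings: Vec<f64>) -> Option<(f64, f64, f64)> {
    if readings.len() < 3 {
        return None; // not enough samples to trim and still average
    }
    readings.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let trimmed = &readings[1..readings.len() - 1];
    let avg = trimmed.iter().sum::<f64>() / trimmed.len() as f64;
    Some((trimmed[0], trimmed[trimmed.len() - 1], avg))
}

fn main() {
    // CPU readings taken from the example run further below.
    let cpu = vec![130.0, 140.0, 156.0, 153.0, 152.0, 140.0, 136.0, 145.0, 137.0, 138.0];
    if let Some((min, max, avg)) = summarize(cpu) {
        println!("CPU [%]:: min: {min}, max: {max}, avg: {avg:.1} %");
    }
}
```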


Build

cargo build -r --bin benchmark 

Usage

benchmark 0.1.0
Benchmarking data collector.

USAGE:
    benchmark [OPTIONS] <APPLICATION>

ARGS:
    <APPLICATION>    Application path (just name if it is in the same directory)

OPTIONS:
    -h, --help           Print help information
    -l, --log <LOG>      Set custom log level: info, debug, trace [default: info]
    -r, --runs <RUNS>    Number of runs to be executed [default: 10]
    -V, --version        Print version information

Example output

09:33:24.899 (t: main) INFO - benchmark - Application to be benchmark is: /opt/workspace/ballista/target/release/examples/example_processing
09:33:24.899 (t: main) INFO - benchmark - Number of runs: 10
09:33:24.902 (t: main) INFO - benchmark - Collecting data::example_processing
09:33:24.902 (t: main) INFO - benchmark::bench::analysis - Run 0 of 10
09:33:24.947 (t: main) INFO - benchmark::bench::analysis - Run 1 of 10
09:33:24.983 (t: main) INFO - benchmark::bench::analysis - Run 2 of 10
09:33:25.016 (t: main) INFO - benchmark::bench::analysis - Run 3 of 10
09:33:25.049 (t: main) INFO - benchmark::bench::analysis - Run 4 of 10
09:33:25.087 (t: main) INFO - benchmark::bench::analysis - Run 5 of 10
09:33:25.132 (t: main) INFO - benchmark::bench::analysis - Run 6 of 10
09:33:25.188 (t: main) INFO - benchmark::bench::analysis - Run 7 of 10
09:33:25.238 (t: main) INFO - benchmark::bench::analysis - Run 8 of 10
09:33:25.288 (t: main) INFO - benchmark::bench::analysis - Run 9 of 10
09:33:25.338 (t: main) INFO - benchmark - Processing outputs
0.04,130,18752,
0.03,140,18664,
0.03,156,18856,
0.03,153,18868,
0.04,152,18884,
0.04,140,18904,
0.05,136,19404,
0.05,145,19220,
0.05,137,18780,
0.05,138,18788,
09:33:25.339 (t: main) INFO - benchmark::bench::collector - SUMMARY:
09:33:25.339 (t: main) INFO - benchmark::bench::collector - Time [ms]:: min: 30, max: 50, avg: 41 ms
09:33:25.339 (t: main) INFO - benchmark::bench::collector - CPU [%]:: min: 130, max: 156, avg: 142.7 %
09:33:25.339 (t: main) INFO - benchmark::bench::collector - Memory [kB]:: min: 18664, max: 19404, avg: 18912 kB

Process finished with exit code 0


Also, in the current directory of the benchmark app an output directory is created, named "bench_<your_app_name>" (e.g. bench_example_processing), which contains:

Output CSV file:

Time,Cpu,Mem
0.04,130,18752
0.03,140,18664
0.03,156,18856
0.03,153,18868
0.04,152,18884
0.04,140,18904
0.05,136,19404
0.05,145,19220
0.05,137,18780
0.05,138,18788

and output graphs, plus a summary report: summary_report.txt

TEST

cargo build --example test_app -r   

cargo run --bin benchmark -- /opt/workspace/app_benchmark/target/release/examples/test_app

cargo run --bin benchmark -- "/opt/workspace/app_benchmark/target/release/examples/test_app -additional -app -params"


TODO:

  • incremental runs: use date/time in the output dir
  • local DB or file structure to track changes/application trends over time
  • move away from the GNU time dependency to sysinfo



Note: For monitoring long-running processes such as servers/streaming apps, see https://github.com/yarenty/app_tracer.
