fix(network): no proxying of localhost connections by default #564

Merged
1 commit merged on Mar 24, 2025

Conversation

@3nprob commented Mar 20, 2025

  • fix: Set NO_PROXY=127.0.0.1 for the pacman and Gentoo package managers, to avoid proxying connections to repos that are already accessed over localhost.
  • feat: Allow overriding the ALL_PROXY and NO_PROXY environment variables (see the sketch below).
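
A minimal sketch of the override pattern (the no_proxy default matches the diff further down; the all_proxy fallback and the script name in the usage line are illustrative assumptions):

no_proxy="${NO_PROXY:-127.0.0.1}"     # caller-provided NO_PROXY wins, else default to localhost
all_proxy="${ALL_PROXY:-$PROXY_ADDR}" # assumption: fall back to the proxy address configured elsewhere

# Override at invocation time, e.g. to exempt an extra loopback address:
#   NO_PROXY="127.0.0.1,127.0.1.2" ./setup-proxy.sh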

Related:


codecov bot commented Mar 20, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.68%. Comparing base (13295f7) to head (46e2fc4).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #564      +/-   ##
==========================================
- Coverage   71.10%   70.68%   -0.42%     
==========================================
  Files           3        3              
  Lines         481      481              
==========================================
- Hits          342      340       -2     
- Misses        139      141       +2     

☔ View full report in Codecov by Sentry.

@@ -155,6 +155,7 @@ fi
if [ -e /etc/portage/make.conf ]; then
update_conf /etc/portage/make.conf "http_proxy=\"$PROXY_ADDR\"
https_proxy=\"$PROXY_ADDR\"
no_proxy=\"${NO_PROXY:-127.0.0.1}\"
@marmarek (Member) commented Mar 21, 2025

I don't see it documented in the make.conf(5) man page; do you have any source for this?

@3nprob (Author)

It's not documented on that manpage but wget does support it (see gitlab link in separate reply).
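
A quick way to see wget honoring it (a sketch; assumes a proxy on 127.0.0.1:8082 and a local server on port 1234, mirroring the curl examples further down, and that Portage exports make.conf variables into the fetch command's environment, as it does for http_proxy):

$ no_proxy=127.0.0.1 http_proxy=http://127.0.0.1:8082 wget -qO- http://127.0.0.1:1234/
# wget connects to 127.0.0.1:1234 directly, bypassing the proxy, so an
# undocumented no_proxy entry in make.conf still takes effect for fetches.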

@marmarek (Member) commented Mar 21, 2025

Maybe stupid question, but you want to access a repo that is on "localhost" in the template, right? Why not simply configure it as file:// instead of http://localhost/ ?
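
For reference, a local pacman repo served over file:// would look something like this (repo name, path, and signature policy are hypothetical):

# /etc/pacman.conf
[localrepo]                       # hypothetical repo name
SigLevel = Optional TrustAll      # relaxed signing policy for a trusted local repo
Server = file:///srv/pacman-repo  # hypothetical path to the repo directory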

@marmarek (Member)

But if you want to change that anyway, tests will need an update - currently they rely on accessing the localhost repo via the proxy (and this is actually how it checks that the proxy gets used). I wonder if anybody else relies on this feature...

@3nprob (Author) commented Mar 21, 2025

> Maybe stupid question, but you want to access a repo that is on "localhost" in the template, right? Why not simply configure it as file:// instead of http://localhost/ ?

Well, it's not actually running from the local filesystem just because it's listening on 127.0.0.1... Much like the one listening on 8082. In my case it's running in a separate dedicated AppVM proxied over qrexec/qubes.ConnectTCP.

But even if one is doing it from the local filesystem in the builder, again, we often run this script inside ephemeral containers or disposables, so the filesystem is seldom convenient.
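
For context, that forwarding can be set up with qvm-connect-tcp (the qube name and ports here are illustrative):

# In the qube running the package manager: bind local port 18080 to port 80
# of the dedicated repo qube via the qubes.ConnectTCP service (subject to
# the qrexec policy configured in dom0):
$ qvm-connect-tcp 18080:repo-qube:80
# After this, http://127.0.0.1:18080/ reaches the repo with no network path
# between the two qubes.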

@3nprob (Author) commented Mar 21, 2025

> But if you want to change that anyway, tests will need an update - currently they rely on accessing the localhost repo via the proxy (and this is actually how it checks that the proxy gets used). I wonder if anybody else relies on this feature...

Do you think it would be enough to make the tests use an alternative loopback address (127.0.1.2 or whatever) and keep the NO_PROXY default at 127.0.0.1? I would think that using non-standard loopback addresses is about as esoteric as proxying localhost, considering the path of least surprise...
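
A sketch of what that test change would amount to (ports are illustrative): serve the test repo on a secondary loopback address, which a NO_PROXY=127.0.0.1 default does not exempt:

# Any 127.0.0.0/8 address is loopback on Linux, so no extra setup is needed:
$ python3 -m http.server 8080 --bind 127.0.1.2 &

# This still goes through the proxy, since 127.0.1.2 doesn't match NO_PROXY:
$ NO_PROXY=127.0.0.1 http_proxy=http://127.0.0.1:8082 curl http://127.0.1.2:8080/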

@marmarek (Member)

No issue in the builder, but here it's about the behavior of the final template, so the blast radius of a behavior change is bigger. What about having the updates proxy forward to that qrexec proxy instead? It gives a bit more control to the updates proxy, but you don't really need to use sys-net for it - it can also be a dedicated qube.

> Do you think it would be enough to make the tests use an alternative loopback address (127.0.1.2 or whatever)

That may be a good idea anyway.

@3nprob (Author) commented Mar 21, 2025

> What about having the updates proxy forward to that qrexec proxy instead?

Yeah, but:

  • It increases overhead and is less efficient. A caching proxy will redundantly cache the local packages unless explicitly overridden (finicky), costing extra time and space.
    • It also introduces another potential failure point, as pacman doesn't retry failed downloads.
  • I may want all my qubes to use the updates proxy for public repos but restrict access to the private repo to certain domains. Running two separate updates proxies and updating the configuration accordingly is not a good time.

I'm not sure I fully understand the concern/risk involved here... Considering the update proxy out of the box will 403 these localhost connections, what is a (hypothetical or real) scenario this might break for an existing user?

@marmarek (Member)

> Considering the update proxy out of the box will 403 these localhost connections

Ah, right. Then this change should be okay. Just update the tests first to use a different loopback address.

@3nprob (Author) commented Mar 21, 2025

> > Considering the update proxy out of the box will 403 these localhost connections
>
> Ah, right. Then this change should be okay. Just update the tests first to use a different loopback address.

Sure! Would that be just the tests in this repo, triggered here, or are there integration tests elsewhere to tackle?

@marmarek (Member)

> integration tests elsewhere to tackle?

At least here: https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/tests/integ/vm_update.py#L496-L538

@3nprob (Author) commented Mar 22, 2025

> > integration tests elsewhere to tackle?
>
> At least here: https://github.com/QubesOS/qubes-core-admin/blob/main/qubes/tests/integ/vm_update.py#L496-L538

So, TBH, I still haven't figured out how to make sure my test runs are using the fork 100%... At least I have those same tests passing both with and without setting the env var when running them.

From what I've seen so far, no changes should be needed there. Since they refer to localhost (and not 127.0.0.1), requests to those repos are still passed through the proxy after the change, as the NO_PROXY check happens before the hostname is resolved to an IP address.

Should be the same as with curl, I believe:

$ NO_PROXY=127.0.0.1 http_proxy=http://127.0.0.1:8082 curl http://127.0.0.1:1234 >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to 127.0.0.1 port 1234 after 1 ms: Couldn't connect to server

vs

$ NO_PROXY=localhost http_proxy=http://127.0.0.1:8082 curl http://127.0.0.1:1234 >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   510    0   510    0     0  38531      0 --:--:-- --:--:-- --:--:-- 39230

@3nprob (Author) commented Mar 22, 2025

Wait, so requests to 127.0.0.1 in the proxy are blocked while localhost is allowed...? Is that intentional?

https://github.com/QubesOS/qubes-core-agent-linux/blob/main/network/updates-blacklist

$ http_proxy=http://127.0.0.1:8082 curl -v http://localhost:18080/hello > /dev/null
* Uses proxy env variable http_proxy == 'http://127.0.0.1:8082'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 127.0.0.1:8082...
* Connected to 127.0.0.1 (127.0.0.1) port 8082
> GET http://localhost:18080/hello HTTP/1.1
> Host: localhost:18080
> User-Agent: curl/8.6.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 OK
< Content-Type: application/octet-stream
< ETag: "4061447778"
< Last-Modified: Sat, 08 Mar 2025 04:54:54 GMT
< Content-Length: 0
< Accept-Ranges: bytes
< Date: Sat, 22 Mar 2025 12:24:47 GMT
< Server: lighttpd/1.4.69
<
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #0 to host 127.0.0.1 left intact

vs

$ http_proxy=http://127.0.0.1:8082 curl -v http://127.0.0.1:18080/hello > /dev/null
* Uses proxy env variable http_proxy == 'http://127.0.0.1:8082'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 127.0.0.1:8082...
* Connected to 127.0.0.1 (127.0.0.1) port 8082
> GET http://127.0.0.1:18080/hello HTTP/1.1
> Host: 127.0.0.1:18080
> User-Agent: curl/8.6.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Filtered
< Server: tinyproxy/1.11.2
< Content-Type: text/html
< Connection: close
<
{ [510 bytes data]
100   510    0   510    0     0   207k      0 --:--:-- --:--:-- --:--:--  249k
* Closing connection

@marmarek (Member)

Ok, then I guess no change is needed (will test).
As for blocking only 127.0.0.1: the only reason for that entry is to show an error page with a special tag for Whonix, see 6139ed5
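
For reference, tinyproxy's filtering is driven by a file of one regular expression per line, wired up with the Filter directive (a sketch, not the verbatim contents of the linked config):

# tinyproxy.conf (excerpt)
Filter "/etc/tinyproxy/updates-blacklist"

# updates-blacklist: a request whose host matches any line gets the
# "403 Filtered" response seen in the transcripts above, e.g.:
127\.0\.0\.1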

@3nprob (Author) commented Mar 22, 2025

> Ok, then I guess no change is needed (will test).

Ty!

> As for blocking only 127.0.0.1: the only reason for that entry is to show an error page with a special tag for Whonix, see 6139ed5

Maybe it could be filtered only on Whonix, then?

Whonix/qubes-whonix#3

(I guess that patch would solve my specific use-case separately from the NO_PROXY addition here, if followed up with removing that tinyproxy filter from vanilla Debian/Fedora - but I still think both make sense in any case)


slightly OT...

The way this is set up looks like it could give some users the wrong idea resulting in dangerous behavior. Imagine a scenario where user...

EDIT: nvm

@marmarek (Member)

> slightly OT...

There are some corner cases here indeed, but also the updates proxy is allowed only for templates by default, so it's pretty limited.

@3nprob (Author) commented Mar 22, 2025

> > slightly OT...
>
> There are some corner cases here indeed, but also the updates proxy is allowed only for templates by default, so it's pretty limited.

FWIW, the hypothetical scenario I described is also not valid, because sys-whonix's tinyproxy has an upstream of 127.0.0.1:9104 (the Tor SocksPort), set up by qubes-whonix.
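
That chaining is a single tinyproxy directive (a sketch; 9104 is the Tor SocksPort mentioned above):

# tinyproxy.conf on sys-whonix (sketch): relay all proxied requests through
# the local Tor SOCKS port instead of connecting out directly
upstream socks5 127.0.0.1:9104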

@qubesos-bot commented Mar 24, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025032406-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025031804-4.3&flavor=update

  • system_tests_extra

    • TC_00_QVCTest_whonix-gateway-17: test_010_screenshare (failure)
      ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 0 == 0
  • system_tests_network_updates

  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio

    • TC_20_AudioVM_Pulse_debian-12-xfce: test_223_audio_play_hvm (failure)
      AssertionError: only silence detected, no useful audio data
  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Explorer-empt...
  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/dcWzE-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_basic_vm_qrexec_gui_ext4

  • system_tests_basic_vm_qrexec_gui_xfs

  • system_tests_suspend@hw1

    • suspend: wait_serial (wait serial expected)
      # wait_serial expected: qr/p5~T5-\d+-/...

    • suspend: Failed (test died + timed out)
      # Test died: command 'true' timed out at /usr/lib/os-autoinst/autot...

Failed tests

14 failures (the same failures as listed under "New failures" above)

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/132953#dependencies

14 fixed
  • system_tests_suspend

    • suspend: unnamed test (unknown)
    • suspend: Failed (test died)
      # Test died: no candidate needle with tag(s) 'SUSPEND-FAILED' match...
  • system_tests_basic_vm_qrexec_gui

  • system_tests_qrexec

  • system_tests_kde_gui_interactive

    • clipboard_and_web: unnamed test (unknown)

    • clipboard_and_web: Failed (test died)
      # Test died: no candidate needle with tag(s) 'qubes-website' matche...

    • clipboard_and_web: wait_serial (wait serial expected)
      # wait_serial expected: "lspci; echo 2E8vz-\$?-"...

  • system_tests_guivm_vnc_gui_interactive

    • gui_filecopy: unnamed test (unknown)
    • gui_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'files-work' matched...
  • system_tests_audio

  • system_tests_whonix@hw7

    • whonixcheck: fail (unknown)
      Whonixcheck for sys-whonix failed...

    • whonixcheck: unnamed test (unknown)

  • system_tests_whonix

    • whonixcheck: fail (unknown)
      Whonixcheck for sys-whonix failed...

    • whonixcheck: unnamed test (unknown)

Unstable tests

Performance Tests

Performance degradation:

17 performance degradations
  • whonix-gateway-17_exec: 7.82 🔺 ( previous job: 6.82, degradation: 114.66%)
  • whonix-gateway-17_socket: 8.93 🔺 ( previous job: 7.24, degradation: 123.43%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 98442.00 🔺 ( previous job: 129298.00, degradation: 76.14%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 118893.00 🔺 ( previous job: 294295.00, degradation: 40.40%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 12438.00 🔺 ( previous job: 95454.00, degradation: 13.03%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 21883.00 🔺 ( previous job: 79803.00, degradation: 27.42%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8570.00 🔺 ( previous job: 10795.00, degradation: 79.39%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 142323.00 🔺 ( previous job: 250795.00, degradation: 56.75%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 156768.00 🔺 ( previous job: 184752.00, degradation: 84.85%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4090.00 🔺 ( previous job: 4903.00, degradation: 83.42%)
  • fedora-41-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 96147.00 🔺 ( previous job: 162081.00, degradation: 59.32%)
  • fedora-41-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 64000.00 🔺 ( previous job: 87940.00, degradation: 72.78%)
  • fedora-41-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 2097.00 🔺 ( previous job: 3599.00, degradation: 58.27%)
  • fedora-41-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 137891.00 🔺 ( previous job: 170062.00, degradation: 81.08%)
  • fedora-41-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 292898.00 🔺 ( previous job: 334687.00, degradation: 87.51%)
  • fedora-41-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 285949.00 🔺 ( previous job: 324737.00, degradation: 88.06%)
  • fedora-41-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 2327.00 🔺 ( previous job: 5672.00, degradation: 41.03%)

Remaining performance tests:

55 tests
  • debian-12-xfce_exec: 7.33 🔺 ( previous job: 7.12, degradation: 102.98%)
  • debian-12-xfce_exec-root: 28.63 🟢 ( previous job: 28.65, improvement: 99.91%)
  • debian-12-xfce_socket: 8.18 🟢 ( previous job: 8.60, improvement: 95.06%)
  • debian-12-xfce_socket-root: 9.02 🔺 ( previous job: 8.52, degradation: 105.81%)
  • debian-12-xfce_exec-data-simplex: 67.78 🟢 ( previous job: 71.62, improvement: 94.64%)
  • debian-12-xfce_exec-data-duplex: 66.41 🟢 ( previous job: 70.34, improvement: 94.41%)
  • debian-12-xfce_exec-data-duplex-root: 83.96 🔺 ( previous job: 82.72, degradation: 101.50%)
  • debian-12-xfce_socket-data-duplex: 163.00 🔺 ( previous job: 156.96, degradation: 103.85%)
  • fedora-41-xfce_exec: 9.41 🔺 ( previous job: 9.27, degradation: 101.49%)
  • fedora-41-xfce_exec-root: 61.93 🔺 ( previous job: 61.51, degradation: 100.68%)
  • fedora-41-xfce_socket: 8.49 🟢 ( previous job: 8.63, improvement: 98.42%)
  • fedora-41-xfce_socket-root: 8.70 🟢 ( previous job: 8.71, improvement: 99.96%)
  • fedora-41-xfce_exec-data-simplex: 71.65 🟢 ( previous job: 75.53, improvement: 94.86%)
  • fedora-41-xfce_exec-data-duplex: 69.76 🟢 ( previous job: 71.56, improvement: 97.48%)
  • fedora-41-xfce_exec-data-duplex-root: 105.45 🟢 ( previous job: 109.13, improvement: 96.63%)
  • fedora-41-xfce_socket-data-duplex: 149.85 🟢 ( previous job: 150.61, improvement: 99.50%)
  • whonix-gateway-17_exec-root: 40.02 🟢 ( previous job: 40.43, improvement: 98.98%)
  • whonix-gateway-17_socket-root: 7.92 🔺 ( previous job: 7.65, degradation: 103.53%)
  • whonix-gateway-17_exec-data-simplex: 72.80 🟢 ( previous job: 78.32, improvement: 92.95%)
  • whonix-gateway-17_exec-data-duplex: 75.22 🟢 ( previous job: 76.65, improvement: 98.13%)
  • whonix-gateway-17_exec-data-duplex-root: 96.28 🔺 ( previous job: 88.52, degradation: 108.77%)
  • whonix-gateway-17_socket-data-duplex: 151.21 🟢 ( previous job: 171.76, improvement: 88.03%)
  • whonix-workstation-17_exec: 8.06 🔺 ( previous job: 7.67, degradation: 105.09%)
  • whonix-workstation-17_exec-root: 57.60 🟢 ( previous job: 58.26, improvement: 98.86%)
  • whonix-workstation-17_socket: 8.14 🟢 ( previous job: 8.19, improvement: 99.31%)
  • whonix-workstation-17_socket-root: 8.12 🟢 ( previous job: 8.13, improvement: 99.84%)
  • whonix-workstation-17_exec-data-simplex: 71.34 🟢 ( previous job: 74.99, improvement: 95.12%)
  • whonix-workstation-17_exec-data-duplex: 63.84 🟢 ( previous job: 72.71, improvement: 87.80%)
  • whonix-workstation-17_exec-data-duplex-root: 86.39 🟢 ( previous job: 99.82, improvement: 86.54%)
  • whonix-workstation-17_socket-data-duplex: 167.60 🟢 ( previous job: 169.50, improvement: 98.88%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 408960.00 🔺 ( previous job: 446963.00, degradation: 91.50%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 5995.00 🔺 ( previous job: 6149.00, degradation: 97.50%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 4931.00 🟢 ( previous job: 4826.00, improvement: 102.18%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 450032.00 🟢 ( previous job: 382273.00, improvement: 117.73%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 430449.00 🔺 ( previous job: 437636.00, degradation: 98.36%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 103640.00 🟢 ( previous job: 62195.00, improvement: 166.64%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 7785.00 🟢 ( previous job: 6479.00, improvement: 120.16%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7868.00 🟢 ( previous job: 7669.00, improvement: 102.59%)
  • fedora-41-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 399153.00 🟢 ( previous job: 368309.00, improvement: 108.37%)
  • fedora-41-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 351399.00 🟢 ( previous job: 318716.00, improvement: 110.25%)
  • fedora-41-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 83724.00 🟢 ( previous job: 82694.00, improvement: 101.25%)
  • fedora-41-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8015.00 🔺 ( previous job: 8485.00, degradation: 94.46%)
  • fedora-41-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 1302.00 🟢 ( previous job: 542.00, improvement: 240.22%)
  • fedora-41-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 359101.00 🔺 ( previous job: 373957.00, degradation: 96.03%)
  • fedora-41-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 90100.00 🟢 ( previous job: 61534.00, improvement: 146.42%)
  • fedora-41-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 91362.00 🟢 ( previous job: 80283.00, improvement: 113.80%)
  • fedora-41-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 2171.00 🔺 ( previous job: 2215.00, degradation: 98.01%)
  • fedora-41-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8285.00 🟢 ( previous job: 7540.00, improvement: 109.88%)
  • fedora-41-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 1354.00 🟢 ( previous job: 1130.00, improvement: 119.82%)
  • fedora-41-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 397790.00 🟢 ( previous job: 369868.00, improvement: 107.55%)
  • fedora-41-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 176305.00 🔺 ( previous job: 179949.00, degradation: 97.97%)
  • fedora-41-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 66835.00 🟢 ( previous job: 17567.00, improvement: 380.46%)
  • fedora-41-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 82607.00 🟢 ( previous job: 79021.00, improvement: 104.54%)
  • fedora-41-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8316.00 🟢 ( previous job: 7867.00, improvement: 105.71%)
  • fedora-41-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 2165.00 🟢 ( previous job: 1953.00, improvement: 110.86%)
