Backport of #26781 with the follow-up fix from #27000 folded in.
On release branches, the ROS integration test now clones a matching
branch from Auterion/px4-ros2-interface-lib instead of always using
main, preventing build failures from uORB message divergence between
main and release branches.
Branch resolution uses GITHUB_BASE_REF on pull requests (the branch
the code is merging into) and falls back to GITHUB_REF_NAME for
direct pushes, so backport PRs resolve to their target release
branch instead of the PR author's branch.
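A minimal sketch of the resolution logic, assuming the standard GitHub Actions environment variables (the TARGET_BRANCH name is illustrative):

```shell
#!/bin/sh
# Resolve which px4-ros2-interface-lib branch to clone.
# On pull_request events GITHUB_BASE_REF holds the merge target
# (e.g. "release/1.17"); on push events it is empty, so fall
# back to GITHUB_REF_NAME (the pushed branch or tag name).
TARGET_BRANCH="${GITHUB_BASE_REF:-$GITHUB_REF_NAME}"
echo "cloning px4-ros2-interface-lib branch: ${TARGET_BRANCH}"
```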
Fixes https://github.com/Auterion/px4-ros2-interface-lib/issues/184
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
The current workflow_dispatch path builds whatever HEAD of the dispatch ref
is, labels the resulting image with px4_version, and publishes. That's
fine for rebuilding current state but it cannot rebuild the exact commit
a release tag points to, because the dispatch loads the workflow file
from one ref and implicitly checks out the same ref for the build.
This matters for release recovery. When the v1.17.0-rc2 tag push failed
to publish containers back on 2026-03-13 (the v1 GHA cache protocol
removal in RunsOn v2.12.0), the tag was not re-pushed, so the only way
to publish rc2 containers now is via workflow_dispatch. Without this
change, a dispatch against release/1.17 builds release/1.17 HEAD and
labels it v1.17.0-rc2, which produces a container whose contents do not
match the rc2 tag's actual code. That is not a faithful recovery.
Add a build_ref input that controls only the checkout ref, defaulting
to empty which falls back to github.ref (preserving current behavior
for both push events and dispatches that omit the input). With this,
a release recovery looks like:
gh workflow run dev_container.yml --repo PX4/PX4-Autopilot \
--ref release/1.17 \
-f px4_version=v1.17.0-rc2 \
-f build_ref=v1.17.0-rc2 \
-f deploy_to_registry=true
The workflow loads from release/1.17 HEAD (which has the cache fix
from 39b0568 and the hardening from d74db56a), but the build uses
Tools/setup/Dockerfile from the rc2 tag. The published image has
rc2 contents under the rc2 label, as if the original tag push had
worked.
All three actions/checkout steps (setup, build, deploy) take the same
ref expression so every job sees a consistent workspace. Non-dispatch
events (push, PR) evaluate github.event.inputs.build_ref to empty and
fall back to github.ref exactly as before.
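A sketch of how the input and checkout ref could look (workflow excerpt; the input wiring matches the description above, other details are assumed):

```yaml
on:
  workflow_dispatch:
    inputs:
      build_ref:
        description: 'Ref to check out for the build (defaults to the dispatch ref)'
        required: false
        default: ''

jobs:
  build:
    steps:
      - uses: actions/checkout@v5
        with:
          # Empty input (or a non-dispatch event) falls back to github.ref.
          ref: ${{ github.event.inputs.build_ref || github.ref }}
```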
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
(cherry picked from commit e4d46f20f4)
Three related fixes to prevent a repeat of the v1.17.0-rc2 incident, where a
post-push GHA cache-export 404 failed the arm64 build after both registry
pushes had already succeeded, fail-fast cancelled amd64, and the deploy job
was skipped, leaving the registries with only a partial arm64 publish and no
multi-arch manifest.
- Mark cache export as non-fatal via ignore-error=true on cache-to. A
successful registry push should never be undone by a cache-layer flake.
This alone would have let rc2 publish correctly.
- Decouple the deploy job from the build job's exit code. Change its if:
gate to !cancelled() + setup success only, and promote the existing
"Verify Images Exist Before Creating Manifest" step from a warning into
a hard precondition. Deploy now runs whenever both per-arch tags actually
exist in the registries, which is its real precondition, and fails loudly
if a tag is missing.
- Bump every action to the current major (runs-on/action v2,
actions/checkout v5, docker/login-action v4, docker/setup-buildx-action v4,
docker/build-push-action v7, docker/metadata-action v6). This gets the
workflow off Node 20 before GitHub's June 2 2026 forced runtime switch
and keeps runs-on/action on the same major as the runs-on platform.
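A sketch of the first two changes, assuming job names setup/build/deploy (step details abbreviated):

```yaml
jobs:
  build:
    steps:
      - uses: docker/build-push-action@v7
        with:
          push: true
          # A cache-export flake no longer fails a build whose
          # registry push already succeeded.
          cache-to: type=gha,mode=max,ignore-error=true
  deploy:
    needs: [setup, build]
    # Run whenever the run wasn't cancelled and setup succeeded,
    # regardless of the build job's exit code; the verify step then
    # hard-fails if a per-arch tag is missing from the registries.
    if: ${{ !cancelled() && needs.setup.result == 'success' }}
```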
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
(cherry picked from commit d74db56a06)
Backport of #26742 to release/1.17.
RunsOn v2.12.0 (March 6, 2026) removed v1 cache toolkit support,
causing the buildx GHA cache proxy to return 404 for v1 endpoints.
This broke the v1.17.0-rc2 container build.
Removing the explicit version=1 parameter lets buildkit auto-detect
the v2 protocol.
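The change amounts to dropping the parameter from the GHA cache exporter configuration, roughly (exact cache options assumed):

```yaml
# Before: pinned to the removed v1 cache protocol.
# cache-to: type=gha,mode=max,version=1
# After: buildkit auto-detects the protocol (v2 on RunsOn >= 2.12.0).
cache-to: type=gha,mode=max
cache-from: type=gha
```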
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
Remove the step that uploaded every version tag to the stable/ S3
directory, which caused QGC users selecting "stable" to receive
pre-release firmware (#26340). The stable/ and beta/ directories
are now controlled exclusively by their respective branch pushes,
while version tags only upload to their versioned archive directory
(e.g., v1.16.1/). Pre-release tags are also correctly marked on
GitHub Releases.
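An illustrative sketch of the fixed routing (bucket layout and naming assumed): a version tag uploads only to its versioned archive directory and no longer also mirrors into stable/, and rc/beta tags are flagged as pre-releases.

```shell
#!/bin/sh
# Route a firmware upload by ref name (example values, paths assumed).
ref="v1.16.1-rc1"                 # example tag name
dest="${ref}/"                    # only the versioned archive directory
case "$ref" in
  v*-rc*|v*-beta*|v*-alpha*) prerelease=true ;;
  *)                         prerelease=false ;;
esac
echo "upload to ${dest} (prerelease=${prerelease})"
```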
Co-authored-by: Julian Oes <julian@oes.ch>
Fixes #26340
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
* Tone down some runners from 8+ CPUs to 4+ CPUs
* Improve and document caching on PX4 builds with an improved ccache key strategy
* Review and document artifact upload logic for binaries uploaded to S3 and GitHub
releases
* Future improvement: introduce a runners configuration file so we can
control more precisely which instances are allocated.
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
- Marked as stale if no activity for 90 days
- Closed if no activity for 30 days after being marked as stale
- Helpful stale and close messages
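The policy above maps onto an actions/stale configuration roughly like this (messages abbreviated, action version assumed):

```yaml
- uses: actions/stale@v9
  with:
    days-before-stale: 90
    days-before-close: 30
    stale-issue-label: stale
    stale-issue-message: >
      This issue has been automatically marked as stale because it has
      not had recent activity. It will be closed if no further activity
      occurs.
    close-issue-message: >
      Closing due to inactivity. Feel free to reopen if this is still
      relevant.
```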
Signed-off-by: Ramon Roche <mrpollo@gmail.com>
Co-authored-by: Daniel Agar <daniel@agar.ca>
There was a build error with the previous container:
CMake Error at /usr/share/cmake-3.28/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find Threads (missing: Threads_FOUND)
Call Stack (most recent call first):
  /usr/share/cmake-3.28/Modules/FindPackageHandleStandardArgs.cmake:600 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.28/Modules/FindThreads.cmake:226 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  build/px4_sitl_test/_deps/abseil-cpp-src/CMakeLists.txt:98 (find_package)
See https://github.com/PX4/PX4-Autopilot/actions/runs/16235116238/job/45844179833
* ekf2: allow manual position reset when horizontal aiding is active
This allows the pilot to override position estimates manually
* mavlink sim: add support for the GPS "stuck" failure type
* mavlink sim: add GNSS failure "wrong" type
* ekf2-gnss: add reset mode
This allows the user to choose whether the position should immediately
be reset to GNSS on fusion timeout or if the EKF can continue with
velocity dead-reckoning.
* ekf2: fix unit test changes due to GNSS start logic
In particular, the EKF doesn't need to reset the states if the test
ratio is already passing
* rename mode enum
* reset to gps lat lon on init
* remove obsolete reset-condition (handled in #25223)
* WIP try to upgrade compiler externally
---------
Co-authored-by: bresch <brescianimathieu@gmail.com>
Co-authored-by: Niklas Hauser <niklas@auterion.com>
From https://github.com/google/fuzztest.
This will now also add gtest (via CMake FetchContent)
and requires a newer CMake (container updates):
CMake 3.19 or higher is required. You are running version 3.16.3
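For reference, pulling gtest in via FetchContent typically looks like the following sketch (repository tag illustrative; the 3.19 floor comes from fuzztest's own requirement, not FetchContent itself):

```cmake
cmake_minimum_required(VERSION 3.19)
include(FetchContent)
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG        v1.14.0
)
# Downloads and adds gtest's targets to this build.
FetchContent_MakeAvailable(googletest)
```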