Mirror of https://github.com/bevyengine/bevy.git
Synced 2026-05-06 06:06:42 -04:00
Branch: main, 172 commits
e626337b00 | Use the shorthand functions to construct Val in examples (#24096)

This is my first bevy PR, please tell me if I'm doing anything wrong.

# Objective

Contribute to #22695. Showcase the preferred coding style in all examples.

## Solution

Replace `Val::` constructors with the more ergonomic shorthand functions. Change their float literals to integer literals if they are integral. Exceptions:

- const contexts (the shorthand functions are not const)
- inside `bsn!` macros (these are new and presumably know what they are doing)
- in testbed (these are not really examples)
- `Val::ZERO` (no helper function)

## Testing

Ran the changed examples before and after, except the library example `widgets` where I just checked that it still builds.

## Context

There was PR #22765 that fixed the same thing but only in the UI examples.

---------

Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
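For a concrete sense of the style change, here is a minimal before/after sketch (import paths assumed; the #20551 entry further down in this log confirms the helpers behave like `Val::Px` / `Val::Percent`):

```rust
use bevy::ui::{percent, px, Val};

fn node_size() -> (Val, Val) {
    // Before: explicit constructors with float literals.
    let old = (Val::Px(200.0), Val::Percent(50.0));

    // After: shorthand helpers, with integral floats written as integers.
    let new = (px(200), percent(50));

    assert_eq!(old, new);
    new
}
```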
edf38baaf6 | headless_renderer example overallocates Buffer (#24005)

# Objective

The headless_renderer example over-allocates the Buffer. `RenderDevice::align_copy_bytes_per_row` takes bytes per row, but the example is passing pixels and then multiplying the padded result by bytes per pixel. So it allocates 8192 bytes per row instead of 7680.

## Solution

Multiply pixels per row by bytes per pixel and pass that to `align_copy_bytes_per_row`.

## Testing

- Did you test these changes? If so, how? Ran the example with various pixel widths. The over-allocation doesn't affect the rendered output because the other call to `align_copy_bytes_per_row` in `update` is correct, so we properly extract the image from the padded buffer.
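The arithmetic in the fix is easy to check with a stand-in for the alignment helper (a local function assumed to round up to wgpu's 256-byte copy alignment, which is what the numbers above imply):

```rust
// Local stand-in for RenderDevice::align_copy_bytes_per_row: round a row size
// in bytes up to wgpu's COPY_BYTES_PER_ROW_ALIGNMENT (256).
fn align_copy_bytes_per_row(row_bytes: usize) -> usize {
    const ALIGN: usize = 256;
    (row_bytes + ALIGN - 1) / ALIGN * ALIGN
}

fn main() {
    let width_px = 1920;
    let bytes_per_pixel = 4; // e.g. an Rgba8 format

    // Buggy: align the pixel count, then scale: align(1920) * 4 = 2048 * 4 = 8192.
    assert_eq!(align_copy_bytes_per_row(width_px) * bytes_per_pixel, 8192);

    // Fixed: align the actual byte count: align(1920 * 4) = 7680 (already aligned).
    assert_eq!(align_copy_bytes_per_row(width_px * bytes_per_pixel), 7680);
}
```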
082c44ea4c | Remove ExtractedView::hdr, add ExtractedView::texture_format, move compositing_space to ExtractedCamera (#23734)

# Objective

- Clean up our texture format handling.
- Fix #23732
- Get us closer to #22563
- It makes no sense for views to talk about being hdr or not. They have a texture format, that's it. What do HDR shadows even mean lol
- Same for compositing_space

## Solution

- Remove `ExtractedView::hdr`
- Add `ExtractedView::texture_format`
- Move `ExtractedView::compositing_space` to `ExtractedCamera::compositing_space`
- Add texture_format to a bunch of specialization keys instead of an hdr bool
- Convert `VolumetricFogPipelineKey` to not use flags and just use a bool and texture format
- Remove `BevyDefault` TextureFormat
- Remove `ViewTarget` TEXTURE_FORMAT_HDR

## Testing

- Pretty extensively tested at this point

This has a migration guide.

---------

Co-authored-by: Willow Black <wmcblack@gmail.com>
Co-authored-by: Máté Homolya <mate.homolya@gmail.com>
Co-authored-by: Luo Zhihao <luo_zhihao@outlook.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
4b09a461e9 | Simplify Command error handling traits (#23432)

# Objective

I have working `Command` reflection for my game:

```rust
#[derive(Clone)]
pub struct ReflectCommand {
    pub apply: fn(&mut World, &dyn PartialReflect, &TypeRegistry),
}

impl ReflectCommand {
    pub fn apply(&self, world: &mut World, command: &dyn PartialReflect, registry: &TypeRegistry) {
        (self.apply)(world, command, registry);
    }
}

impl<C: Command<Result> + Reflect + TypePath> FromType<C> for ReflectCommand {
    fn from_type() -> Self {
        ReflectCommand {
            apply: |world, command, registry| {
                let command = from_reflect_with_fallback::<C>(command, world, registry);
                command.apply(world);
            },
        }
    }
}
```

However, I am currently only allowed to support a single output type across *all* command types (which I've chosen to be `Result` for the time being). This is because, by virtue of `Out` being a generic parameter, `Command` *can* be implemented multiple times for the same type, but with different output types. In order for my command reflection logic to support command types with *any* output type, I need the ability to guarantee that `Command` will only be implemented once for some `FooCommand`. That's why `Out` should be changed into an associated type.

## Solution

- Turned the `Out` generic parameter into an associated type on both `Command` and `EntityCommand`.
- Bounded the `Command::Out` associated type with a new `CommandOutput` trait.
  - This replaces the functionality of the now removed `HandleError` trait, and allows us to add its functions directly on the `Command` trait.
- Also bounded the `EntityCommand::Out` associated type with the new `CommandOutput` trait.
  - This replaces the functionality of the now removed `CommandWithEntity` trait, and allows us to add its functions directly on the `EntityCommand` trait.

Additionally, the new `CommandOutput` trait gives bevy users a place to hook into error handling logic with their own types! It also comes with `diagnostic::on_unimplemented` diagnostics!

## Testing

Maybe TODO: Current tests appear green, but should we add tests for the new `CommandOutput` trait?
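A rough sketch of the resulting trait shape, assuming the names from the description above (`CommandOutput`, the `Out` associated type); the real bevy traits carry more methods and bounds:

```rust
use bevy::ecs::world::World;

// Rough sketch only: hook point for error handling on user-defined output types.
pub trait CommandOutput: Send + 'static {}

pub trait Command: Send + 'static {
    // `Out` is now an associated type, so `Command` can only be implemented
    // once per command type, which is what the reflection use case needs.
    type Out: CommandOutput;

    fn apply(self, world: &mut World) -> Self::Out;
}
```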
846b196cda | Deploy Docs - Build Docs step: Resolve most Warnings (#23343)

# Objective

- Resolve most of the warnings in the Build Docs step of Deploy Docs; you can see them here: https://github.com/bevyengine/bevy/actions/runs/23021953246/job/66860356132

## Solution

- Resolve most of the warnings
- The `doc_cfg` feature should only be enabled with `docsrs`, not `docsrs_dep` (I just followed this pattern from other crates tbh, like `bevy_math` and `bevy_material`)
- I unlinked example docs that reference non-public items within the example itself
- I corrected some links

Note: I didn't fix the warnings concerning the macros in bevy-reflect for `tuple.rs` because I'm not macro savvy. If someone knows what to do in those cases (should I just remove the `$(#[$meta])*` lines because they're not in use?), just let me know and I can do it (or you can open a pull!)

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
53050f90e2 | New bevy_settings crate (#23034)

Yet another attempt at implementing bevy preferences. This version uses bevy_reflect serialization to convert resources from toml values into Rust types and vice versa. This is based on the feedback that I got from the earlier attempt in #22770.

To indicate that a resource type should be loaded as preferences, you'll need to add the `SettingsGroup` annotation:

```rust
#[derive(Resource, SettingsGroup, Reflect, Default)]
#[reflect(Resource, SettingsGroup, Default)]
struct Counter {
    count: i32,
}
```

This will produce a TOML file that looks like this:

```toml
[counter]
count = 3
```

## Theory of Operation

The `PreferencesPlugin` scans the type registry for all resource types that impl `SettingsGroup` and `Default`. Derive attributes can be used to write the resource to a different file (or a different key in browser local storage).

`PreferencesPlugin` should be added before other plugins. This ensures that any other plugins can have access to the settings data during initialization.

The loader checks to see if the resource already exists; if so, it uses that resource instance and patches the toml values into it, preserving any defaults that have been set. If the resource does not exist, it constructs a new one via `ReflectDefault` before applying the toml properties. (There was a suggestion of using `FromWorld` instead of `Default`. This is worth considering, although there may be issues with calling `FromWorld` so early in the app initialization lifecycle, before most resources have been created.)

On `wasm` platforms, this uses browser local storage rather than the filesystem to store preferences. On platforms which have neither, preferences are not supported (although it's possible that some platform-specific settings storage could be implemented).

## Note on terminology

I've tried to consistently use the term "preferences" rather than "settings" or "config" because those are broader terms. For example, the `xorg.conf` file, commonly used to configure an XWindows display, is technically a "settings" file, but it is not "preferences". However, for end users it's perfectly permissible to use the word "Settings" in menus and navigation elements, since that is the term most commonly used in software today.

## Open Issues

### Syncing with non-resources

Some important settings are not stored in resources: one of the most common things that users will want to preserve is the window position and size, which exist on the window entity. It's not possible, under my design, to store arbitrary entities as preferences, so in order for the window properties to be saved they will have to be copied to a resource before being serialized. We probably don't want to be continually copying the window size every time the window is dragged or moved, so we'll need some way to know when serialization is about to happen. I'm thinking that possibly some global event could be triggered just before serialization, and the handlers could use this event to make last-minute patches to resources.

## Saving If Changed

Because saving involves i/o, we want to save only when preferences have actually changed. This involves two discrete checks:

* Whether a save operation needs to be done at all
* Which files need to be saved

The reason for these two steps is that even checking which files need to be saved is non-trivial and probably should not be done every frame. Rather than check the `is_changed()` field of every preference resource every frame, the code currently relies on the user to issue an explicit `Command` whenever they change a preferences property. This gets especially tricky if the settings to be saved aren't actually in a resource, like the aforementioned window position.

There are two forms of the command: `SavePreferences` and `SavePreferencesSync`. The former, which uses an i/o task, is the preferred approach, unless the app is about to exit, in which case the sync version is preferred.

Once we know that a save will take place, a second pass can be used to check the timestamp on every resource: if any resource has a tick value later than the last time the file was either loaded or saved, then we know that file is out of date.

Also, some properties can change at high frequency; for example, dragging the master volume slider changes the volume every frame. For this reason, we will generally want to put in delay / debounce logic to batch updates together (not present in this PR). However, this delay means that if the user adjusts a setting and then immediately terminates the app, the setting won't be recorded. (There is no chance of the file being corrupted, as it uses standard practices for ensuring file integrity.) Unfortunately, on some platforms, depending on how the user chooses to quit (Command-Q on Mac), there's no opportunity to listen for the `AppExit` event. For this reason, it's best to use a "belt and suspenders" approach which listens for both `AppExit` and autosave timer events.

Fixes #23172
Fixes #13311

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Kevin Chen <chen.kevin.f@gmail.com>
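A minimal usage sketch of the explicit-save flow described above. `SavePreferences` is modeled here as a unit-struct `Command` purely for illustration; the crate's real command may be shaped differently:

```rust
use bevy::ecs::world::Command;
use bevy::prelude::*;

#[derive(Resource, Default)]
struct Volume(f32);

// Stand-in for the crate's async save command.
struct SavePreferences;

impl Command for SavePreferences {
    fn apply(self, _world: &mut World) {
        // The real command would spawn an i/o task that writes changed files.
    }
}

// The user signals "preferences changed" explicitly after mutating a setting.
fn set_volume(mut volume: ResMut<Volume>, mut commands: Commands) {
    volume.0 = 0.5;
    commands.queue(SavePreferences);
}
```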
59e9ee3a1a | Store Resources as components on singleton entities (#20934)

This is part of #19731.

# Resources as Components

## Motivation

More things should be entities. This simplifies the API and the lower-level implementation, and the tools we have for entities and components can be used for other things in the engine. In particular, for resources, it is really handy to have observers, which we currently don't have. See #20821 under 1A for a more specific use.

## Current Work

This removes the `resources` field from the world storage and instead stores the resources on singleton entities. For easy lookup, we add a `HashMap<ComponentId, Entity>` to `World`, in order to quickly find the singleton entity where the resource is stored.

Because we store resources on entities, we derive `Component` alongside `Resource`. This means that

```rust
#[derive(Resource)]
struct Foo;
```

turns into

```rust
#[derive(Resource, Component)]
struct Foo;
```

This was also done for reflection, meaning that

```rust
#[derive(Resource, Reflect)]
#[reflect(Resource)]
struct Bar;
```

becomes

```rust
#[derive(Resource, Component, Reflect)]
#[reflect(Resource, Component)]
struct Bar;
```

In order to distinguish resource entities, they are tagged with the `IsResource` component. Additionally, to ensure that they aren't queried by accident, they are also tagged as internal entities, which means that they don't show up in queries by default.

## Drawbacks

- Currently you can't have a struct that is both a `Resource` and a `Component`, because `Resource` expands to also implement `Component`; this throws a compiler error as it's implemented twice.
- Because every reflected Resource must also implement `ReflectComponent`, you need to import `bevy_ecs::reflect::ReflectComponent` every time you use `#[reflect(Resource)]`. This is kind of unintuitive.

## Future Work

- Simplify `Access` in the ECS, to only deal with components (and not components *and* resources).
- Newtype `Res<Resource>` to `Single<Ref<Resource>>` (or something similar).
- Eliminate `ReflectResource`.
- Take stabs at simplifying the public-facing API.

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Dimitrios Loukadakis <dloukadakis@users.noreply.github.com>
9fd2637846 | Add tools to avoid unnecessary AssetEvent::Modified events that lead to rendering performance costs (#16751) (#22460)

# Objective

- Fixes #16751

## Solution

- `Assets::get_mut` now returns a wrapper `AssetMut` type instead of `&mut impl Asset`.
- `AssetMut` implements `Deref` and `DerefMut`.
- `DerefMut` marks assets as changed.
- When dropped, `AssetMut` will add an `AssetEvent::Modified` event to a queue, but only if the asset was marked as changed.

## Testing

- Did you test these changes? If so, how?
  - No unit tests were added; the change is pretty straightforward.
  - Test project: https://github.com/MatrixDev/bevy-feature-16751-test
    - With change: ~100 fps.
    - Without change: ~15 fps.
- Are there any parts that need more testing?
  - I don't really see how this can break anything or add measurable overhead.
  - `AssetEvent::Modified` will now be sent after the asset was modified instead of before. It should not affect anything, but is still worth noting.
- How can other people (reviewers) test your changes? Is there anything specific they need to know?
  - Have a large number of entities that constantly update their materials.
  - Properties of those materials should be animated in a stepped manner (like changing color every 0.1 seconds).
  - Update the material only if the value has actually changed:

```rust
if material.base_color != new_color {
    material.base_color = new_color;
}
```

- If relevant, what platforms did you test these changes on, and are there any important ones you can't test?
  - Tested on macOS (MacBook M1); this is not a platform-specific issue.

PS: This is my first PR, so please don't judge.
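The change-tracking trick is the classic `DerefMut` guard; here is a self-contained sketch with simplified types (not bevy_asset's real ones):

```rust
use core::ops::{Deref, DerefMut};

// Simplified guard: the real AssetMut also queues AssetEvent::Modified when
// it is dropped with the changed flag set.
struct AssetMut<'a, A> {
    asset: &'a mut A,
    changed: &'a mut bool,
}

impl<A> Deref for AssetMut<'_, A> {
    type Target = A;
    fn deref(&self) -> &A {
        // Plain reads leave the flag untouched, so no Modified event is sent.
        self.asset
    }
}

impl<A> DerefMut for AssetMut<'_, A> {
    fn deref_mut(&mut self) -> &mut A {
        // Only mutable access marks the asset as modified.
        *self.changed = true;
        self.asset
    }
}
```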
71dd9ea7db | Render Recovery (#22761)

# Objective

- Recover from rendering errors.
- Another step towards render recovery after #22714, #22759, and #16481

## Solution

- Use `wgpu::Device::set_device_lost_callback` and `wgpu::Device::on_uncaptured_error` to listen for errors.
- Add a state machine for the renderer
  - Update it on error
- Add a `RenderErrorHandler` to let users specify behavior on error by returning a specific `RenderErrorPolicy`
  - This lets us, for example, ignore validation errors, delete responsible entities, or reload the renderer if the device was lost.

## Testing

- #22757 with any of

```rs
.insert_resource(bevy_render::error_handler::RenderErrorHandler(|_, _, _| {
    bevy_render::error_handler::RenderErrorPolicy::StopRendering
}))
```

```rs
.insert_resource(bevy_render::error_handler::RenderErrorHandler(|_, _, _| {
    bevy_render::error_handler::RenderErrorPolicy::Recover(default())
}))
```

Note: no release note yet, as recovery does not exactly work well: this PR gets us to the point of being able to care about it, but we currently instantly crash on recover due to gpu resources not existing anymore. We need to build more resilience before publicizing, imo.

---------

Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
c7fa06672b | warn about risks of crashing the renderer (#22808)

# Objective

- The new render_recovery example can crash so hard that a restart is needed to fully recover

## Solution

- Warn users about it
7b0d0a4e0f | Add example for triggering and recovering from rendering errors (#22757)

# Objective

- Test suite/demonstration for render recovery

## Solution

- Add an example that has ways to trigger various rendering errors

## Testing

- Run it

Note: I'm opening this separately so that behavior on bevy main can be compared to the upcoming render recovery PR. It doesn't have to merge first, though.
f1f41547fc | Replace RenderGraph with systems (#22144)

# render-graph-as-systems

> [!NOTE]
> Remember to check "hide whitespace" in the diff view options when reviewing this PR

This PR removes the `RenderGraph` in favor of using systems.

## Motivation

The `RenderGraph` API was originally created when the ECS was significantly more immature. It was also created with the intention of supporting an input/output-based slot system for managing resources that has never been used. While resource management is an important potential use of a render graph, current rendering code doesn't make use of any patterns relating to it. Since the ECS has improved, the functionality of `Schedule` has basically become co-extensive with what the `RenderGraph` API is doing, i.e. ordering bits of system-like logic relative to one another and executing them in a big chunk.

Additionally, while there's still desire for more advanced techniques like resource management in the graph, it's desirable to implement those in ECS terms rather than creating more `RenderGraph`-specific abstraction. In short, this sets us up to iterate on a more ECS-based approach, while deleting ~3k lines of mostly unused code.

## Implementation

At a high level: we use `Schedule` as our "sub-graph." Rather than running the graph, we run a schedule. Systems can be ordered relative to one another. The render system uses a `RenderGraph` schedule to define the "root" of the graph. `core_pipeline` adds a `camera_driver` system that runs the per-camera schedules. This top-level schedule provides an extension point for apps that may want to do custom rendering, or non-camera rendering.

### `CurrentView` / `ViewQuery`

When running schedules per-camera in the `camera_driver` system, we insert a `CurrentView` resource that's used to mark the currently iterating view. We also add a new param `ViewQuery` that internally uses this resource to execute the query and, as a convenience, skip the system if it doesn't match.

### `RenderContext`

The `RenderContext` is now a system param that wraps a `Deferred` for tracking the state of the current command encoder and queued buffers.

### `SystemBuffer`

We use a system buffer impl to track command encoders in the render context and rely on apply deferred in order to encode them all. Currently, this encodes them in series. There are likely opportunities here to make this more efficient.

## Benchmarks

### Bistro

<img width="1635" height="825" alt="Screenshot 2026-01-15 at 7 57 40 PM" src="https://github.com/user-attachments/assets/8e55a959-89a3-4947-bfc5-c04780f82e7b" />

### Caldera

<img width="1631" height="828" alt="Screenshot 2026-01-15 at 8 13 06 PM" src="https://github.com/user-attachments/assets/e7e8ae0d-41c3-430f-8b4d-9099b3d922a0" />

## Future steps

There are a number of exciting potential changes that could follow here:

- We can explore adding something like a read-only schedule to pick up some more potential parallelism in graph execution.
- We can use more things like run conditions in order to prevent systems from running at all in the first place.
- We can explore things like automating resource creation via system params.

## TODO:

- [x] Make sure 100% of everything still works.
- [x] Benchmark to make sure we don't regress performance
- [x] Re-add docs

---------

Co-authored-by: atlas dostal <rodol@rivalrebels.com>
27fd2c4f78 | Reuse gpu textures when possible (#22552)

# Objective

When an image's data is modified, we currently always create a new gpu texture. If the size and format haven't changed, we could instead reuse the existing texture. This is more efficient for cpu image data modifications, and also means we don't need to `get_mut` materials using these textures to propagate the changes, unless the descriptor has changed.

## Solution

- Put the full texture descriptor and view descriptor into the `GpuImage` so we can compare fully
- Reuse the existing texture and view when the descriptors are unchanged
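The reuse rule reduces to a descriptor comparison; a self-contained sketch with simplified stand-in types (the solution above compares the full texture and view descriptors):

```rust
// Simplified stand-ins for the texture descriptor and GPU-side texture.
#[derive(Clone, PartialEq)]
struct TexDesc {
    width: u32,
    height: u32,
    format: &'static str,
}

struct GpuTex {
    desc: TexDesc,
}

// Reuse the existing texture when only the pixel data changed; reallocate
// only when the descriptor (size, format, ...) differs.
fn upload(existing: Option<GpuTex>, new_desc: TexDesc) -> GpuTex {
    match existing {
        Some(tex) if tex.desc == new_desc => tex, // rewrite contents in place
        _ => GpuTex { desc: new_desc },           // allocate a fresh texture
    }
}
```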
ca2073e430 | Add externally driven rendering example (#22551)

# Objective

- Show how to drive the loop for rendering externally.

## Solution

- Add example.

## Testing

Produces:

<img width="410" height="187" alt="{EFA87DC0-0054-4EB4-8863-9DE13CC492D5}" src="https://github.com/user-attachments/assets/a9376e44-af22-4d98-80cc-02e14f0914d8" />
a88af65738 | Contact Shadows (#22382)

# Objective

- Implement contact shadows to add fine shadow detail where shadow cascades cannot.

## Solution

- Extend our existing pbr implementation using our existing raymarching functions.

---

## Showcase

<img width="1824" height="1180" alt="image" src="https://github.com/user-attachments/assets/e93b79c5-c596-4a9e-b94d-20bdde1d863b" />
<img width="1824" height="1180" alt="image" src="https://github.com/user-attachments/assets/0fd7dffa-60b8-4b92-8fad-7f993d4d89dd" />

https://github.com/user-attachments/assets/e74b190d-9ae3-4aaf-97f0-b520930a0667
https://github.com/user-attachments/assets/e80ccb26-bbaa-4d25-a823-8ea12354c5b9
https://github.com/user-attachments/assets/b04f4b00-92bd-4a2f-b7dd-5157d8fbe0ab

<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/b7629908-dd32-48db-8ee7-a4d2dd8f66c2" />
<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/3de0258e-9191-4180-ac57-41b32e1205bd" />
<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/951477f9-e9a9-426f-ae8d-18ae50cc7b85" />
<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/2291453c-da57-4fcc-a6b0-f60f6eac6cbb" />
<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/5820cdff-ea54-4294-b520-2a8d8dc24996" />
<img width="1073" height="685" alt="image" src="https://github.com/user-attachments/assets/3ea16481-7689-4e99-87e2-1589f1532e4c" />

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: charlotte 🌸 <charlotte.c.mcelwain@gmail.com>
2e9ef6988c | Fix non-srgb RenderTarget::Image (#22090)

# Objective

Fixes https://github.com/bevyengine/bevy/pull/22031#issuecomment-3640036590. Fixes #15201.

#22031 made `ViewTarget::out_texture_format` return the underlying texture's format, which causes some issues:

1. `ExtractedWindow::swap_chain_texture_view` always uses an srgb view, but the underlying swap chain texture in WebGPU can be Bgra8unorm, leading to a format mismatch between the render pipeline and the render pass.
2. We can no longer use an srgb view for a non-srgb target texture; it will panic due to an incompatible pipeline:

```rs
let mut image = Image::new_target_texture(512, 512, TextureFormat::Rgba8Unorm);
image.texture_view_descriptor = Some(bevy_render::render_resource::TextureViewDescriptor {
    format: Some(TextureFormat::Rgba8UnormSrgb),
    ..Default::default()
});
image.texture_descriptor.view_formats = &[TextureFormat::Rgba8UnormSrgb];
```

## Solution

Reverts #22031. Renames some `format` to `view_format` explicitly. Adds `view_format` to `GpuImage` and `Image::new_target_texture` so we can make the render pipeline match the render pass texture view.

## Testing

Tested `render_to_texture` and `screenshot` examples on linux and webgpu.

<details>
<summary>The rendered Rgba8Unorm texture with or without an srgb view in MeshMaterial3d looks the same, but the underlying data is different:</summary>

With srgb view:
<img width="912" height="640" alt="Screenshot_20251212_223355" src="https://github.com/user-attachments/assets/d320bad9-d11a-4d3d-93a9-879af6413658" />

Without srgb view:
<img width="912" height="640" alt="Screenshot_20251212_223313" src="https://github.com/user-attachments/assets/522abf23-9c85-468d-8d17-a94495ee4452" />

</details>
67633b34da | Convert RenderTarget to Component (#20917)

# Objective

#20830 created the possibility that we may want to have render targets that produce a number of outputs, e.g. depth and normals. This is a first step towards something like that (e.g. a `RendersTo` relation) by converting `RenderTarget` to be a component. This is also useful for out-of-tree render targets that may want to do something like `#[require(RenderTarget::Image)]` once BSN lands.

## Solution

Make it a component.
0c815db70e | Upgrade to wgpu 27 and associated crates (#21746)

Supersedes https://github.com/bevyengine/bevy/pull/21725.

---------

Co-authored-by: Jasmine Schweitzer <jasmine.schweitzer@nominal.io>
Co-authored-by: Ben Cochrane <ben.cochrane2112@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
3683903744 | Implement pretty component logging (#21587)

# Objective

`log_components` looks like this:

<img width="1609" height="266" alt="image" src="https://github.com/user-attachments/assets/6e4e5173-9f9e-4cf6-8ed1-a64d4b991a8a" />

Eek! What is going on??? I could pretty-print this by fiddling with the logging filters, but that

- is weird boilerplate, just arcane enough that I would not be able to do it without looking it up or copy-pasting it from somewhere, and
- would change ALL debug prints to use newlines, which would flood my terminal.

Can I just pretty-print the component log plz?

## Solution

Add `log_components_pretty`:

<img width="1480" height="125" alt="image" src="https://github.com/user-attachments/assets/ffecc45c-8bb9-4156-8aeb-623c3db65900" />

Much better!

- The name and the entity ID of the logged entity
- All component names without the `DebugName` wrapper
- Alphanumerically sorted components

And we can go one step further when downstream (thanks Chris!):

```rust
DefaultPlugins.set(bevy::log::LogPlugin {
    fmt_layer: |_| {
        Some(Box::new(
            bevy::log::tracing_subscriber::fmt::Layer::default()
                .map_fmt_fields(|f| f.debug_alt())
                .with_writer(std::io::stderr),
        ))
    },
    ..default()
})
```

which gives us newlines:

<img width="939" height="339" alt="image" src="https://github.com/user-attachments/assets/ed4c327e-12ab-46cd-b698-1d911a2d762e" />

## Additional info

Added a `debug` feature, because the experience of getting blasted with 30 strings telling me to enable a feature is suboptimal.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
4d74baf1ae | BufferedEvent -> Message Rename (#20953)

This renames the concept of `BufferedEvent` to `Message`, and updates our APIs, comments, and documentation to refer to these types as "messages" instead of "events". It also removes/updates anything that considers messages to be "observable", "listenable", or "triggerable".

This is a followup to https://github.com/bevyengine/bevy/pull/20731, which omitted the `BufferedEvent -> Message` rename for brevity. See that post for rationale.
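A sketch of what the renamed buffered API plausibly looks like after this change; the `Message` derive and the `MessageWriter`/`MessageReader` names are assumed to follow the rename described above:

```rust
use bevy::prelude::*;

// Assumed post-rename derive name.
#[derive(Message)]
struct Damage(f32);

fn attack(mut writer: MessageWriter<Damage>) {
    writer.write(Damage(10.0));
}

fn apply_damage(mut reader: MessageReader<Damage>) {
    // Process all buffered messages of type `Damage`.
    for Damage(amount) in reader.read() {
        info!("took {amount} damage");
    }
}
```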
13877fa84d | Add a new trait to accept more types in the Val-helper functions (#20551)

# Objective

- Allow the `Val`-helper functions to accept more types besides just `f32`

Fixes #20549

## Solution

- Adds a new trait that can be implemented for numbers
- That trait has a method that converts `self` to `f32`

## Testing

- I tested it using Rust's testing framework (although I didn't leave the tests in, as I don't deem them important enough)

<details>
<summary>Rust test</summary>

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_val_helpers_work() {
        let p = px(10_u8);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_u16);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_u32);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_u64);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_u128);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_i8);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_i16);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_i32);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_i64);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10_i128);
        assert_eq!(p, Val::Px(10.0));
        let p = px(10.3_f32);
        assert_eq!(p, Val::Px(10.3));
        let p = px(10.6_f64);
        assert_eq!(p, Val::Px(10.6));
    }
}
```

</details>

---

## Showcase

```rust
// Same as Val::Px(10.)
px(10);
px(10_u8);
px(10.0);
```
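A minimal sketch of the conversion trait the solution describes; the trait name `IntoF32` is an assumption, and `Val` is stubbed locally so the snippet stands alone:

```rust
// Stub of bevy_ui's Val, just so this sketch compiles on its own.
#[derive(Debug, PartialEq)]
enum Val {
    Px(f32),
}

// Assumed trait name; the PR's actual name may differ.
trait IntoF32 {
    fn into_f32(self) -> f32;
}

// Impls for a couple of representative number types.
impl IntoF32 for f32 {
    fn into_f32(self) -> f32 { self }
}
impl IntoF32 for i32 {
    fn into_f32(self) -> f32 { self as f32 }
}

fn px(value: impl IntoF32) -> Val {
    Val::Px(value.into_f32())
}

fn main() {
    assert_eq!(px(10), Val::Px(10.0)); // integer literals work now
    assert_eq!(px(10.0), Val::Px(10.0));
}
```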
79c1f7c5e4 | remove panic in TextureFormatPixelInfo::pixel_info (#20574)

# Objective

Fixes #20365

## Solution

`TextureFormatPixelInfo::pixel_info` now returns a `Result` instead of panicking. Maybe a custom error specific to this case is needed, so as to not have the other variants of `TextureAccessError`.

## Testing

Ran my game on it, using mesh, sprite, ui; everything seems good. **BUT** it's my first time contributing this type of code; I don't know much about rendering and its integration in bevy. **REVIEW THIS LIKE I DON'T KNOW ANYTHING**. Especially, I don't know if there is some unwanted consequence of these changes in other places.

What remains: maybe we should add some `warn!`? At least in picking. Currently in the sprite picking backend, if the pixel data couldn't be accessed (to determine if the alpha is greater than the threshold), the entity is skipped. Maybe a `warn!` here at least? (Imo it shouldn't be skipped, and instead be taken as if it was valid: I'd rather have a sprite where the whole size of it is picked than having it not working at all. Maybe a SpritePickingSettings per entity?)
cb34db6e98 | Fix latest lints for rust beta (#20516)

# Objective

- Fix #19679

## Solution

- Fix lints
03dd839b82 | Use bevy::camera in examples instead of bevy::render::camera re-export (#20477)

# Objective

- Prepare for removing re-exports

## Solution

- As the title says: import from `bevy::camera` directly

## Testing

- cargo check --examples
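The change, as a before/after import sketch (paths inferred from the title):

```rust
// Before: going through the bevy_render re-export.
// use bevy::render::camera::Camera;

// After: importing from the dedicated module directly.
use bevy::camera::Camera;
```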
1b6e3f53fd | Add Image constructor specialised for rendering to a texture (#17209)

# Objective

Fixes #7358. Redo of #7360.

Ergonomics. There's a bunch of enigmatic boilerplate for constructing a texture for rendering to, which could be greatly simplified for the external user-facing API.

## Solution

- Take part of the render_to_target example and turn it into a new constructor for `Image`, with minimal changes beyond the `Default` implementation.
- Update the render_to_target example to use the new API.

Strictly speaking, there are two small differences between the constructor and the example:

~~1. The example sets the `size` when initially constructing the `Image`, then `resize`s, but `resize` sets the `size` anyway so we don't need to do this extra step.~~
~~2. The example sets `Image.texture_descriptor.format` to `TextureFormat::Bgra8UnormSrgb`, but the default impl sets this to `TextureFormat::Rgba8UnormSrgb` via `wgpu::TextureFormat::bevy_default()`. I don't know what sort of impact this has, but it works on my machine.~~

I've deliberately chosen to only include `width` and `height` as parameters, but maybe it makes sense for some of the other properties to be exposed as parameters.

---

## Changelog

### Added

Added `Image::new_target_texture` constructor for simpler creation of render target textures.

---

Notes:

- This is a re-do of https://github.com/bevyengine/bevy/pull/7360 - there's some relevant discussion on code style there.
- The docs for the method want to refer to `bevy_render::camera::Camera` and `bevy_render::camera::RenderTarget::Image`. `bevy_image` used to be part of `bevy_render` and was split out in the past, and `bevy_image` doesn't depend on `bevy_render`. What's the recommendation here?

---------

Co-authored-by: Antony <antony.m.3012@gmail.com>
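A usage sketch under this PR's two-parameter signature (the #22090 entry earlier in this log shows a later variant that also takes a `TextureFormat`); the camera wiring here is illustrative, not copied from the example:

```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;

fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
    // One call replaces the old hand-rolled texture-descriptor boilerplate.
    let target = images.add(Image::new_target_texture(512, 512));

    // Render this camera into the image instead of a window.
    commands.spawn((
        Camera2d,
        Camera {
            target: RenderTarget::Image(target.into()),
            ..default()
        },
    ));
}
```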
4e9e78c31e | Split BufferedEvent from Event (#20101)

# Objective

> I think we should axe the shared `Event` trait entirely. It doesn't serve any functional purpose, and I don't think it's useful pedagogically

@alice-i-cecile on discord

## Solution

- Remove `Event` as a supertrait of `BufferedEvent`
- Remove any `Event` derives that were made unnecessary
- Update release notes

---------

Co-authored-by: SpecificProtagonist <vincentjunge@posteo.net>
70902413b2 | Update log_layers_ecs example for children macro (#18293)

# Objective

Contributes to #18238. Updates the `log_layers_ecs` example to use the `children!` macro. Note that I did not use a macro, nor `Children::spawn`, for the outer layer: since the `EventReader` is borrowed mutably, any `.map` I did on `events.read()` was going to have the reference outlive the function body. I believe this scope of change is correct for the PR.

## Solution

Updates examples to use the Improved Spawning API merged in https://github.com/bevyengine/bevy/pull/17521

## Testing

- Did you test these changes? If so, how?
  - Opened the examples before and after and verified the same behavior was observed. I did this on Ubuntu 24.04.2 LTS using `--features wayland`.
- Are there any parts that need more testing?
  - Other OSs and features can't hurt, but this is such a small change it shouldn't be a problem.
- How can other people (reviewers) test your changes? Is there anything specific they need to know?
  - Run the examples yourself with and without these changes.
- If relevant, what platforms did you test these changes on, and are there any important ones you can't test?
  - See above.

## Showcase

n/a

## Migration Guide

n/a

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
96dcbc5f8c | Upgrade to wgpu version 25.0 (#19563)

# Objective

Upgrade to `wgpu` version `25.0`. Depends on https://github.com/bevyengine/naga_oil/pull/121

## Solution

### Problem

The biggest issue we face upgrading is the following requirement:

> To facilitate this change, there was an additional validation rule put in place: if there is a binding array in a bind group, you may not use dynamic offset buffers or uniform buffers in that bind group. This requirement comes from vulkan rules on UpdateAfterBind descriptors.

This is a major difficulty for us, as there are a number of binding arrays that are used in the view bind group. Note, this requirement does not affect merely uniform buffers that use dynamic offset, but the use of *any* uniform in a bind group that also has a binding array.

### Attempted fixes

The easiest fix would be to change uniforms to be storage buffers whenever binding arrays are in use:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
@group(0) @binding(0) var<storage> view: array<View>;
@group(0) @binding(1) var<storage> lights: array<types::Lights>;
#else
@group(0) @binding(0) var<uniform> view: View;
@group(0) @binding(1) var<uniform> lights: types::Lights;
#endif
```

This requires passing the view index to the shader so that we know where to index into the buffer:

```wgsl
struct PushConstants {
    view_index: u32,
}

var<push_constant> push_constants: PushConstants;
```

Using push constants is no problem because binding arrays are only usable on native anyway. However, this greatly complicates the ability to access `view` in shaders. For example:

```wgsl
#ifdef BINDING_ARRAYS_ARE_USED
mesh_view_bindings::view[mesh_view_bindings::view_index].view_from_world[0].z
#else
mesh_view_bindings::view.view_from_world[0].z
#endif
```

Using this approach would work, but would have the effect of polluting our shaders with ifdef spam basically *everywhere*.

Why not use a function? Unfortunately, the following is not valid wgsl, as it returns a binding directly from a function in the uniform path:

```wgsl
fn get_view() -> View {
#if BINDING_ARRAYS_ARE_USED
    let view_index = push_constants.view_index;
    let view = views[view_index];
#endif
    return view;
}
```

This also poses problems for things like lights, where we want to return a ptr to the light data. Returning ptrs from wgsl functions isn't allowed, even if both bindings were buffers.

The next attempt was to simply use indexed buffers everywhere, in both the binding array and non binding array path. This would be viable if push constants were available everywhere to pass the view index, but unfortunately they are not available on webgpu. This means either passing the view index in a storage buffer (not ideal for such a small amount of state) or using push constants sometimes and uniform buffers only on webgpu. However, this kind of conditional layout infects absolutely everything.

Even if we were to accept just using a storage buffer for the view index, there's also the additional problem that some dynamic offsets aren't actually per-view but per-use of a setting on a camera, which would require passing that uniform data on *every* camera regardless of whether that rendering feature is being used, which is also gross.

As such, although it's gross, the simplest solution is just to bump binding arrays into `@group(1)` and all other bindings up one bind group. This should still bring us under the device limit of 4 for most users.

### Next steps / looking towards the future

I'd like to avoid needing to split our view bind group into multiple parts. In the future, if `wgpu` were to add `@builtin(draw_index)`, we could build a list of draw state in gpu processing and avoid the need for any kind of state change at all (see https://github.com/gfx-rs/wgpu/issues/6823). This would also provide significantly more flexibility to handle things like offsets into other arrays that may not be per-view.

### Testing

Tested a number of examples; there are probably more that are still broken.

---------

Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
7645ce91ed | Add newlines before impl blocks (#19746)

# Objective

Fix https://github.com/bevyengine/bevy/issues/19617

# Solution

Add newlines before all impl blocks. I suspect that at least some of these will be objectionable! If there's a desired Bevy style for this then I'll update the PR. If not then we can just close it - it's the work of a single find and replace.
1079b83af9 | Revert "bevy_log: refactor how log layers are wired together (#19248)" (#19705)

This reverts commit 8661e914a5 (#19248, the entry below).
8661e914a5 | bevy_log: refactor how log layers are wired together (#19248)

# Objective

The current way to wire `Layer`s together using `layer.with(new_layer)` in the `bevy_log` plugin is brittle and not flexible. As #17722 demonstrated, the current solution makes it very hard to do any kind of advanced wiring, as the type system of `tracing::Subscriber` gets in the way very quickly (the type of each new layer depends on the type of the previous ones). We want to make it easier to have more complex wiring of `Layer`s. It would be hard to solve #19085 without it.

## Solution

It aims to be functionally equivalent.

- Replace the use of `layer.with(new_layer)`: we now add `layer.boxed()` to a `Vec<BoxedLayer>`. This is the solution recommended by `tracing_subscriber::Layer` for complex wiring cases (see https://docs.rs/tracing-subscriber/latest/tracing_subscriber/layer/index.html#runtime-configuration-with-layers)
- Do some refactoring and cleanup that is now enabled by the new solution

## Testing

- Ran CI locally on Linux
- Ran the logs examples
- Need people familiar with the features `trace`, `tracing-chrome`, `tracing-tracy` to check that it still works as expected
- Need people with access to `ios`, `android`, and `wasm` to check it as well.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
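The Vec-of-boxed-layers pattern from the linked tracing_subscriber docs, sketched standalone (bevy_log's `BoxedLayer` is an alias for a boxed `Layer` like the one below):

```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, Layer, Registry};

type BoxedLayer = Box<dyn Layer<Registry> + Send + Sync>;

fn init_logging(with_fmt: bool) {
    // Layers can now be collected conditionally, without the type of the
    // subscriber changing with every `.with(...)` call.
    let mut layers: Vec<BoxedLayer> = Vec::new();
    if with_fmt {
        layers.push(tracing_subscriber::fmt::layer().boxed());
    }
    tracing_subscriber::registry().with(layers).init();
}
```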
38c3423693 | Event Split: Event, EntityEvent, and BufferedEvent (#19647)

# Objective

Closes #19564.

The current `Event` trait looks like this:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;

    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The `Event` trait is used by both buffered events (`EventReader`/`EventWriter`) and observer events. If they are observer events, they can optionally be targeted at specific `Entity`s or `ComponentId`s, and can even be propagated to other entities. However, there has long been a desire to split the trait semantically for a variety of reasons; see #14843, #14272, and #16031 for discussion. Some reasons include:

- It's very uncommon to use a single event type as both a buffered event and targeted observer event. They are used differently and tend to have distinct semantics.
- A common footgun is using buffered events with observers or event readers with observer events, as there is no type-level error that prevents this kind of misuse.
- #19440 made `Trigger::target` return an `Option<Entity>`. This *seriously* hurts ergonomics for the general case of entity observers, as you need to `.unwrap()` each time. If we could statically determine whether the event is expected to have an entity target, this would be unnecessary.

There's really two main ways that we can categorize events: push vs. pull (i.e. "observer event" vs. "buffered event") and global vs. targeted:

|              | Push            | Pull                        |
| ------------ | --------------- | --------------------------- |
| **Global**   | Global observer | `EventReader`/`EventWriter` |
| **Targeted** | Entity observer | -                           |

There are many ways to approach this, each with their tradeoffs. Ultimately, we kind of want to split events both ways:

- A type-level distinction between observer events and buffered events, to prevent people from using the wrong kind of event in APIs
- A statically designated entity target for observer events to avoid accidentally using untargeted events for targeted APIs

This PR achieves these goals by splitting event traits into `Event`, `EntityEvent`, and `BufferedEvent`, with `Event` being the shared trait implemented by all events.

## `Event`, `EntityEvent`, and `BufferedEvent`

`Event` is now a very simple trait shared by all events.

```rust
pub trait Event: Send + Sync + 'static {
    // Required for observer APIs
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

You can call `trigger` for *any* event, and use a global observer for listening to the event.

```rust
#[derive(Event)]
struct Speak {
    message: String,
}

// ...

app.add_observer(|trigger: On<Speak>| {
    println!("{}", trigger.message);
});

// ...

commands.trigger(Speak {
    message: "Y'all like these reworked events?".to_string(),
});
```

To allow an event to be targeted at entities and even propagated further, you can additionally implement the `EntityEvent` trait:

```rust
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This lets you call `trigger_targets`, and to use targeted observer APIs like `EntityCommands::observe`:

```rust
#[derive(Event, EntityEvent)]
#[entity_event(traversal = &'static ChildOf, auto_propagate)]
struct Damage {
    amount: f32,
}

// ...

let enemy = commands.spawn((Enemy, Health(100.0))).id();

// Spawn some armor as a child of the enemy entity.
// When the armor takes damage, it will bubble the event up to the enemy.
let armor_piece = commands
    .spawn((ArmorPiece, Health(25.0), ChildOf(enemy)))
    .observe(|trigger: On<Damage>, mut query: Query<&mut Health>| {
        // Note: `On::target` only exists because this is an `EntityEvent`.
        let mut health = query.get(trigger.target()).unwrap();
        health.0 -= trigger.amount();
    });

commands.trigger_targets(Damage { amount: 10.0 }, armor_piece);
```

> [!NOTE]
> You *can* still also trigger an `EntityEvent` without targets using `trigger`. We probably *could* make this an either-or thing, but I'm not sure that's actually desirable.

To allow an event to be used with the buffered API, you can implement `BufferedEvent`:

```rust
pub trait BufferedEvent: Event {}
```

The event can then be used with `EventReader`/`EventWriter`:

```rust
#[derive(Event, BufferedEvent)]
struct Message(String);

fn write_hello(mut writer: EventWriter<Message>) {
    writer.write(Message("I hope these examples are alright".to_string()));
}

fn read_messages(mut reader: EventReader<Message>) {
    // Process all buffered events of type `Message`.
    for Message(message) in reader.read() {
        println!("{message}");
    }
}
```

In summary:

- Need a basic event you can trigger and observe? Derive `Event`!
- Need the event to be targeted at an entity? Derive `EntityEvent`!
- Need the event to be buffered and support the `EventReader`/`EventWriter` API? Derive `BufferedEvent`!

## Alternatives

I'll now cover some of the alternative approaches I have considered and briefly explored. I made this section collapsible since it ended up being quite long :P

<details>
<summary>Expand this to see alternatives</summary>

### 1. Unified `Event` Trait

One option is not to have *three* separate traits (`Event`, `EntityEvent`, `BufferedEvent`), and to instead just use associated constants on `Event` to determine whether an event supports targeting and buffering or not:

```rust
pub trait Event: Send + Sync + 'static {
    type Traversal: Traversal<Self>;

    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;
    const BUFFERED: bool = false;

    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

Methods can then use bounds like `where E: Event<TARGETED = true>` or `where E: Event<BUFFERED = true>` to limit APIs to specific kinds of events.

This would keep everything under one `Event` trait, but I don't think it's necessarily a good idea. It makes APIs harder to read, and docs can't easily refer to specific types of events. You can also create weird invariants: what if you specify `TARGETED = false`, but have `Traversal` and/or `AUTO_PROPAGATE` enabled?

### 2. `Event` and `Trigger`

Another option is to only split the traits between buffered events and observer events, since that is the main thing people have been asking for, and they have the largest API difference. If we did this, I think we would need to make the terms *clearly* separate. We can't really use `Event` and `BufferedEvent` as the names, since it would be strange that `BufferedEvent` doesn't implement `Event`. Something like `ObserverEvent` and `BufferedEvent` could work, but it'd be more verbose.

For this approach, I would instead keep `Event` for the current `EventReader`/`EventWriter` API, and call the observer event a `Trigger`, since the "trigger" terminology is already used in the observer context within Bevy (both as a noun and a verb). This is also what a long [bikeshed on Discord](https://discord.com/channels/691052431525675048/749335865876021248/1298057661878898791) seemed to land on at the end of last year.

```rust
// For `EventReader`/`EventWriter`
pub trait Event: Send + Sync + 'static {}

// For observers
pub trait Trigger: Send + Sync + 'static {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
    const TARGETED: bool = false;

    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}
```

The problem is that "event" is just a really good term for something that "happens". Observers are rapidly becoming the more prominent API, so it'd be weird to give them the `Trigger` name and leave the good `Event` name for the less common API. So, even though a split like this seems neat on the surface, I think it ultimately wouldn't really work. We want to keep the `Event` name for observer events, and there is no good alternative for the buffered variant. (`Message` was suggested, but saying stuff like "sends a collision message" is weird.)

### 3. `GlobalEvent` + `TargetedEvent`

What if instead of focusing on the buffered vs. observed split, we *only* make a distinction between global and targeted events?

```rust
// A shared event trait to allow global observers to work
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For buffered events and non-targeted observer events
pub trait GlobalEvent: Event {}

// For targeted observer events
pub trait TargetedEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is actually the first approach I implemented, and it has the neat characteristic that you can only use non-targeted APIs like `trigger` with a `GlobalEvent` and targeted APIs like `trigger_targets` with a `TargetedEvent`. You have full control over whether the entity should or should not have a target, as they are fully distinct at the type-level.

However, there's a few problems:

- There is no type-level indication of whether a `GlobalEvent` supports buffered events or just non-targeted observer events
- An `Event` on its own does literally nothing, it's just a shared trait required to make global observers accept both non-targeted and targeted events
- If an event is both a `GlobalEvent` and `TargetedEvent`, global observers again have ambiguity on whether an event has a target or not, undermining some of the benefits
- The names are not ideal

### 4. `Event` and `EntityEvent`

We can fix some of the problems of Alternative 3 by accepting that targeted events can also be used in non-targeted contexts, and simply having the `Event` and `EntityEvent` traits:

```rust
// For buffered events and non-targeted observer events
pub trait Event: Send + Sync + 'static {
    fn register_component_id(world: &mut World) -> ComponentId { ... }
    fn component_id(world: &World) -> Option<ComponentId> { ... }
}

// For targeted observer events
pub trait EntityEvent: Event {
    type Traversal: Traversal<Self>;
    const AUTO_PROPAGATE: bool = false;
}
```

This is essentially identical to this PR, just without a dedicated `BufferedEvent`. The remaining major "problem" is that there is still zero type-level indication of whether an `Event` event *actually* supports the buffered API. This leads us to the solution proposed in this PR, using `Event`, `EntityEvent`, and `BufferedEvent`.

</details>

## Conclusion

The `Event` + `EntityEvent` + `BufferedEvent` split proposed in this PR aims to solve all the common problems with Bevy's current event model while keeping the "weirdness" factor minimal. It splits in terms of both the push vs. pull *and* global vs. targeted aspects, while maintaining a shared concept for an "event".

### Why I Like This

- The term "event" remains as a single concept for all the different kinds of events in Bevy.
- Despite all event types being "events", they use fundamentally different APIs. Instead of assuming that you can use an event type with any pattern (when only one is typically supported), you explicitly opt in to each one with dedicated traits.
- Using separate traits for each type of event helps with documentation and clearer function signatures.
- I can safely make assumptions on expected usage.
  - If I see that an event is an `EntityEvent`, I can assume that I can use `observe` on it and get targeted events.
  - If I see that an event is a `BufferedEvent`, I can assume that I can use `EventReader` to read events.
  - If I see both `EntityEvent` and `BufferedEvent`, I can assume that both APIs are supported.

In summary: This allows for a unified concept for events, while limiting the different ways to use them with opt-in traits. No more guess-work involved when using APIs.

### Problems?

- Because `BufferedEvent` implements `Event` (for more consistent semantics etc.), you can still use all buffered events for non-targeted observers. I think this is fine/good. The important part is that if you see that an event implements `BufferedEvent`, you know that the `EventReader`/`EventWriter` API should be supported. Whether it *also* supports other APIs is secondary.
- I currently only support `trigger_targets` for an `EntityEvent`. However, you can technically target components too, without targeting any entities. I consider that such a niche and advanced use case that it's not a huge problem to only support it for `EntityEvent`s, but we could also split `trigger_targets` into `trigger_entities` and `trigger_components` if we wanted to (or implement components as entities :P).
- You can still trigger an `EntityEvent` *without* targets. I consider this correct, since `Event` implements the non-targeted behavior, and it'd be weird if implementing another trait *removed* behavior. However, it does mean that global observers for entity events can technically return `Entity::PLACEHOLDER` again (since I got rid of the `Option<Entity>` added in #19440 for ergonomics). I think that's enough of an edge case that it's not a huge problem, but it is worth keeping in mind.
- ~~Deriving both `EntityEvent` and `BufferedEvent` for the same type currently duplicates the `Event` implementation, so you instead need to manually implement one of them.~~ Changed to always requiring `Event` to be derived.

## Related Work

There are plans to implement multi-event support for observers, especially for UI contexts. [Cart's example](https://github.com/bevyengine/bevy/issues/14649#issuecomment-2960402508) API looked like this:

```rust
// Truncated for brevity
trigger: Trigger<(
    OnAdd<Pressed>,
    OnRemove<Pressed>,
    OnAdd<InteractionDisabled>,
    OnRemove<InteractionDisabled>,
    OnInsert<Hovered>,
)>,
```

I believe this shouldn't be in conflict with this PR. If anything, this PR might *help* achieve the multi-event pattern for entity observers with fewer footguns: by statically enforcing that all of these events are `EntityEvent`s in the context of `EntityCommands::observe`, we can avoid misuse or weird cases where *some* events inside the trigger are targeted while others are not.
1dfe83bb8d | Fix headless_renderer example and mention Screenshot. (#19598)

## Objective

- Makes the `headless_renderer` example work instead of exiting without effect.
- Guides users who actually just need [`Screenshot`](https://docs.rs/bevy/0.16.1/bevy/render/view/window/screenshot/struct.Screenshot.html) to use that instead.

This PR was inspired by my own efforts to do headless rendering, in which the complexity of the `headless_renderer` example was a distraction, and this comment from https://github.com/bevyengine/bevy/issues/12478#issuecomment-2094925039:

> The example added in https://github.com/bevyengine/bevy/pull/13006 would benefit from this change: be sure to clean it up when tackling this work :)

That "cleanup" was not done, and I thought to do it, but it seems to me that using `Screenshot` (in its current form) in the example would not be correct, because (if I understand correctly) the example is trying to, potentially, capture many *consecutive* frames, whereas `Screenshot` by itself gives no means to capture multiple frames without gaps or duplicates. But perhaps I am wrong (the code is complex and not clearly documented), or perhaps that feature isn't worth preserving. In that case, let me know and I will revise this PR.

## Solution

- Added `exit_condition: bevy::window::ExitCondition::DontExit`
- Added a link to `Screenshot` in the crate documentation.

## Testing

- Ran the example and confirmed that it now writes an image file and then exits.
a8376e982e | Rename Timer::finished and Timer::paused to is_finished and is_paused (#19386)

# Objective

Renames `Timer::finished` and `Timer::paused` to `Timer::is_finished` and `Timer::is_paused` to align the public APIs for `Time`, `Timer`, and `Stopwatch`.

Fixes #19110
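A minimal before/after for the rename:

```rust
use bevy::time::Timer;

fn report(timer: &Timer) {
    // Before this change: timer.finished() / timer.paused()
    if timer.is_finished() && !timer.is_paused() {
        println!("timer done");
    }
}
```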
3690ad5b0b | Remove apostrophes in possessive its (#19244)

# Objective

Fix some grammatical errors: it's -> its. Not the most useful commit in the world, but I saw a couple of these and decided to fix the lot.

## Solution

-

## Testing

-
|
|
7b1c9f192e |
Adopt consistent FooSystems naming convention for system sets (#18900)
# Objective

Fixes a part of #14274.

Bevy has an incredibly inconsistent naming convention for its system sets, both internally and across the ecosystem.

*Names of public system set types in Bevy*

Most Bevy types use a naming of `FooSystem` or just `Foo`, but there are also a few `FooSystems` and `FooSet` types. In ecosystem crates on the other hand, `FooSet` is perhaps the most commonly used name in general. Conventions being so wildly inconsistent can make it harder for users to pick names for their own types, to search for system sets on docs.rs, or to even discern which types *are* system sets.

To rein in the inconsistency a bit and help unify the ecosystem, it would be good to establish a common recommended naming convention for system sets in Bevy itself, similar to how plugins are commonly suffixed with `Plugin` (ex: `TimePlugin`). By adopting a consistent naming convention in first-party Bevy, we can softly nudge ecosystem crates to follow suit (for types where it makes sense to do so).

Choosing a naming convention is also relevant now, as the [`bevy_cli` recently adopted lints](https://github.com/TheBevyFlock/bevy_cli/pull/345) to enforce naming for plugins and system sets, and the recommended naming used for system sets is still a bit open.

## Which Name To Use?

Now the contentious part: what naming convention should we actually adopt? This was discussed on the Bevy Discord at the end of last year, starting [here](https://discord.com/channels/691052431525675048/692572690833473578/1310659954683936789). `FooSet` and `FooSystems` were the clear favorites, with `FooSet` very narrowly winning an unofficial poll. However, it seems to me like the consensus was broadly moving towards `FooSystems` at the end and after the poll, with Cart ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311140204974706708)) and later Alice ([source](https://discord.com/channels/691052431525675048/692572690833473578/1311092530732859533)) and also me being in favor of it.

Let's do a quick pros and cons list! Of course these are just what I thought of, so take it with a grain of salt.

`FooSet`:

- Pro: Nice and short!
- Pro: Used by many ecosystem crates.
- Pro: The `Set` suffix comes directly from the trait name `SystemSet`.
- Pro: Pairs nicely with existing APIs like `in_set` and `configure_sets`.
- Con: `Set` by itself doesn't actually indicate that it's related to systems *at all*, apart from the implemented trait. A set of what?
- Con: Is `FooSet` a set of `Foo`s or a system set related to `Foo`? Ex: `ContactSet`, `MeshSet`, `EnemySet`...

`FooSystems`:

- Pro: Very clearly indicates that the type represents a collection of systems. The actual core concept, system(s), is in the name.
- Pro: Parallels nicely with `FooPlugins` for plugin groups.
- Pro: Low risk of conflicts with other names or misunderstandings about what the type is.
- Pro: In most cases, reads *very* nicely and clearly. Ex: `PhysicsSystems` and `AnimationSystems` as opposed to `PhysicsSet` and `AnimationSet`.
- Pro: Easy to search for on docs.rs.
- Con: Usually results in longer names.
- Con: Not yet as widely used.

Really the big problem with `FooSet` is that it doesn't actually describe what it is. It describes what *kind of thing* it is (a set of something), but not *what it is a set of*, unless you know the type or check its docs or implemented traits. `FooSystems` on the other hand is much more self-descriptive in this regard, at the cost of being a bit longer to type.

Ultimately, in some ways it comes down to preference and how you think of system sets. Personally, I was originally in favor of `FooSet`, but have been increasingly on the side of `FooSystems`, especially after seeing what the new names would actually look like in Avian and now Bevy. I prefer it because it usually reads better, is much more clearly related to groups of systems than `FooSet`, and overall *feels* more correct and natural to me in the long term.

For these reasons, and because Alice and Cart also seemed to share a preference for it when it was previously being discussed, I propose that we adopt a `FooSystems` naming convention where applicable.

## Solution

Rename Bevy's system set types to use a consistent `FooSystems` naming where applicable.

- `AccessibilitySystem` → `AccessibilitySystems`
- `GizmoRenderSystem` → `GizmoRenderSystems`
- `PickSet` → `PickingSystems`
- `RunFixedMainLoopSystem` → `RunFixedMainLoopSystems`
- `TransformSystem` → `TransformSystems`
- `RemoteSet` → `RemoteSystems`
- `RenderSet` → `RenderSystems`
- `SpriteSystem` → `SpriteSystems`
- `StateTransitionSteps` → `StateTransitionSystems`
- `RenderUiSystem` → `RenderUiSystems`
- `UiSystem` → `UiSystems`
- `Animation` → `AnimationSystems`
- `AssetEvents` → `AssetEventSystems`
- `TrackAssets` → `AssetTrackingSystems`
- `UpdateGizmoMeshes` → `GizmoMeshSystems`
- `InputSystem` → `InputSystems`
- `InputFocusSet` → `InputFocusSystems`
- `ExtractMaterialsSet` → `MaterialExtractionSystems`
- `ExtractMeshesSet` → `MeshExtractionSystems`
- `RumbleSystem` → `RumbleSystems`
- `CameraUpdateSystem` → `CameraUpdateSystems`
- `ExtractAssetsSet` → `AssetExtractionSystems`
- `Update2dText` → `Text2dUpdateSystems`
- `TimeSystem` → `TimeSystems`
- `AudioPlaySet` → `AudioPlaybackSystems`
- `SendEvents` → `EventSenderSystems`
- `EventUpdates` → `EventUpdateSystems`

A lot of the names got slightly longer, but they are also a lot more consistent, and in my opinion the majority of them read much better. For a few of the names I took the liberty of rewording things a bit; definitely open to any further naming improvements.

There are still also cases where the `FooSystems` naming doesn't really make sense, and those I left alone. This primarily includes system sets like `Interned<dyn SystemSet>`, `EnterSchedules<S>`, `ExitSchedules<S>`, or `TransitionSchedules<S>`, where the type has some special purpose and semantics.

## Todo

- [x] Should I keep all the old names as deprecated type aliases? I can do this, but to avoid wasting work I'd prefer to first reach consensus on whether these renames are even desired.
- [x] Migration guide
- [x] Release notes |
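To illustrate the adopted convention from the user side, here is a minimal sketch (the `PhysicsSystems` set and its systems are hypothetical, not taken from the PR):

```rust
use bevy::prelude::*;

// A user-defined set of physics systems, named per the `FooSystems` convention.
#[derive(SystemSet, Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum PhysicsSystems {
    Integrate,
    DetectCollisions,
}

fn integrate() {}
fn detect_collisions() {}

fn main() {
    App::new()
        // Integration runs before collision detection.
        .configure_sets(
            Update,
            (PhysicsSystems::Integrate, PhysicsSystems::DetectCollisions).chain(),
        )
        .add_systems(
            Update,
            (
                integrate.in_set(PhysicsSystems::Integrate),
                detect_collisions.in_set(PhysicsSystems::DetectCollisions),
            ),
        )
        .run();
}
```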
||
|
|
3b24f520b9 |
feat(log): support customizing default log formatting (#17722)
The LogPlugin now allows overriding the default
`tracing_subscriber::fmt::Layer` through a new `fmt_layer` option. This
enables customization of the default log output format without having to
replace the entire logging system.
For example, to disable timestamps in the log output:
```rust
fn fmt_layer(_app: &mut App) -> Option<bevy::log::BoxedFmtLayer> {
    Some(Box::new(
        bevy::log::tracing_subscriber::fmt::Layer::default()
            .without_time()
            .with_writer(std::io::stderr),
    ))
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(bevy::log::LogPlugin {
            fmt_layer,
            ..default()
        }))
        .run();
}
```
This is different from the existing `custom_layer` option, because that
option _adds_ additional layers to the subscriber, but can't modify the
default formatter layer (at least, not to my knowledge).
I almost always disable timestamps in my Bevy logs, and usually also
tweak other default log formatting (such as `with_span_events`), which
made it so that I always had to disable the default logger. This allows
me to use everything the Bevy logger supports (including tracy support),
while still formatting the default logs the way I like them.
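For example, building on the `fmt_layer` hook above, span-close events could be enabled the same way (a sketch, not part of the PR; `FmtSpan` comes from the re-exported `tracing_subscriber`):

```rust
use bevy::prelude::*;

fn fmt_layer(_app: &mut App) -> Option<bevy::log::BoxedFmtLayer> {
    use bevy::log::tracing_subscriber::fmt::format::FmtSpan;
    Some(Box::new(
        bevy::log::tracing_subscriber::fmt::Layer::default()
            .without_time()
            // Emit an event each time a span closes.
            .with_span_events(FmtSpan::CLOSE),
    ))
}
```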
---------
Signed-off-by: Jean Mertz <git@jeanmertz.com>
|
||
|
|
694db96ab8 |
Fix compile errors on headless example (#18497)
# Objective
- Fixes compile errors on headless example when running `cargo run
--example headless --no-default-features`
```
error[E0432]: unresolved import `bevy::log`
  --> examples/app/headless.rs:13:39
   |
13 | use bevy::{app::ScheduleRunnerPlugin, log::LogPlugin, prelude::*};
   |                                       ^^^ could not find `log` in `bevy`

For more information about this error, try `rustc --explain E0432`.
error: could not compile `bevy` (example "headless") due to 1 previous error
```
## Solution
- Since commit
|
||
|
|
5f86668bbb |
Renamed EventWriter::send methods to write. (#17977)
Fixes #17856.

## Migration Guide

- `EventWriter::send` has been renamed to `EventWriter::write`.
- `EventWriter::send_batch` has been renamed to `EventWriter::write_batch`.
- `EventWriter::send_default` has been renamed to `EventWriter::write_default`.

---------

Co-authored-by: François Mockers <mockersf@gmail.com> |
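A before/after sketch of the rename (`MyEvent` is a stand-in event type):

```rust
use bevy::prelude::*;

#[derive(Event)]
struct MyEvent;

// Before:
// fn produce(mut writer: EventWriter<MyEvent>) {
//     writer.send(MyEvent);
// }

// After:
fn produce(mut writer: EventWriter<MyEvent>) {
    writer.write(MyEvent);
}
```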
||
|
|
3978ba9783 |
Allowed creating uninitialized images (for use as storage textures) (#17760)
# Objective

https://github.com/bevyengine/bevy/issues/17746

## Solution

- Change `Image.data` from being a `Vec<u8>` to an `Option<Vec<u8>>`
- Added functions to help with creating images

## Testing

- Did you test these changes? If so, how?

All current tests pass. Tested a variety of existing examples to make sure they don't crash (they don't).

- If relevant, what platforms did you test these changes on, and are there any important ones you can't test?

Linux x86 64-bit NixOS

---

## Migration Guide

Code that directly accesses `Image` data will now need to use unwrap or handle the case where no data is provided. The behaviour of `new_fill` changed slightly, but not in a way that is likely to affect anything. It no longer panics and will fill the whole texture instead of leaving black pixels if the data provided is not a nice factor of the size of the image.

---------

Co-authored-by: IceSentry <IceSentry@users.noreply.github.com> |
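A minimal sketch of the migration for code reading pixel data (the `first_byte` helper is hypothetical):

```rust
use bevy::image::Image;

// `Image::data` is now `Option<Vec<u8>>`; images created without CPU-side
// data (e.g. for use as storage textures) carry `None`.
fn first_byte(image: &Image) -> Option<u8> {
    image.data.as_ref().and_then(|data| data.first().copied())
}
```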
||
|
|
669d139c13 |
Upgrade to wgpu v24 (#17542)
Didn't remove `WgpuWrapper`; not sure if it's still needed or not.

## Testing

- Did you test these changes? If so, how?

Example runner.

- Are there any parts that need more testing?

Web (portable atomics thingy?), DXC.

## Migration Guide

- Bevy has upgraded to [wgpu v24](https://github.com/gfx-rs/wgpu/blob/trunk/CHANGELOG.md#v2400-2025-01-15).
- When using the DirectX 12 rendering backend, the new priority system for choosing a shader compiler is as follows:
  - If the `WGPU_DX12_COMPILER` environment variable is set at runtime, it is used
  - Else if the new `statically-linked-dxc` feature is enabled, a custom version of DXC will be statically linked into your app at compile time.
  - Else Bevy will look in the app's working directory for `dxcompiler.dll` and `dxil.dll` at runtime.
  - Else if they are missing, Bevy will fall back to FXC (not recommended)

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com> |
||
|
|
a371ee3019 |
Remove tracing re-export from bevy_utils (#17161)
# Objective

- Contributes to #11478

## Solution

- Made `bevy_utils::tracing` `doc(hidden)`
- Re-exported `tracing` from `bevy_log` for end-users
- Added `tracing` directly to crates that need it.

## Testing

- CI

---

## Migration Guide

If you were importing `tracing` via `bevy::utils::tracing`, instead use `bevy::log::tracing`. Note that many items within `tracing` are also directly re-exported from `bevy::log` as well, so you may only need `bevy::log` for the most common items (e.g., `warn!`, `trace!`, etc.). This also applies to the `log_once!` family of macros.

## Notes

- While this doesn't reduce the line-count in `bevy_utils`, it further decouples the internal crates from `bevy_utils`, making its eventual removal more feasible in the future.
- I have just imported `tracing` as we do for all dependencies. However, a workspace dependency may be more appropriate for version management. |
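A sketch of the import change described in the migration guide:

```rust
// Before (no longer public):
// use bevy::utils::tracing::info;

// After: go through `bevy::log` instead.
use bevy::log::tracing::info;
// Common macros are also re-exported directly:
use bevy::log::warn;

fn noisy() {
    info!("via the tracing re-export");
    warn!("via the direct re-export");
}
```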
||
|
|
3c829d7f68 |
Remove everything except Instant from bevy_utils::time (#17158)
# Objective

- Contributes to #11478
- Contributes to #16877

## Solution

- Removed everything except `Instant` from `bevy_utils::time`

## Testing

- CI

---

## Migration Guide

If you relied on any of the following from `bevy_utils::time`:

- `Duration`
- `TryFromFloatSecsError`

Import these directly from `core::time` regardless of platform target (WASM, mobile, etc.)

If you relied on any of the following from `bevy_utils::time`:

- `SystemTime`
- `SystemTimeError`

Instead import these directly from either `std::time` or `web_time` as appropriate for your target platform.

## Notes

`Duration` and `TryFromFloatSecsError` are both re-exports from `core::time` regardless of whether they are used from `web_time` or `std::time`, so there is no value gained from re-exporting them from `bevy_utils::time` as well. As for `SystemTime` and `SystemTimeError`, no Bevy internal crates or examples rely on these types. Since Bevy doesn't have a `Time<Wall>` resource for interacting with wall-time (and likely shouldn't need one), I think removing these from `bevy_utils` entirely and waiting for a use-case to justify inclusion is a reasonable path forward. |
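A sketch of the resulting imports (assumes the `web-time` crate is a dependency on wasm targets):

```rust
// Previously: use bevy_utils::time::{Duration, SystemTime};
use core::time::Duration; // the same type on every target

#[cfg(target_arch = "wasm32")]
use web_time::SystemTime;
#[cfg(not(target_arch = "wasm32"))]
use std::time::SystemTime;

fn uptime_since(start: SystemTime) -> Option<Duration> {
    SystemTime::now().duration_since(start).ok()
}
```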
||
|
|
64efd08e13 |
Prefer Display over Debug (#16112)
# Objective

Fixes #16104

## Solution

I removed all instances of `:?` and put them back one by one where it caused an error. I removed some bevy_utils helper functions that were only used in 2 places and don't add value. See: #11478

## Testing

CI should catch the mistakes

## Migration Guide

`bevy::utils::{dbg,info,warn,error}` were removed. Use `bevy::utils::tracing::{debug,info,warn,error}` instead.

---------

Co-authored-by: SpecificProtagonist <vincentjunge@posteo.net> |
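A sketch of the replacement, per this entry's migration guide (note that the `tracing` re-export later moved to `bevy::log`; see #17161 above):

```rust
// Before (removed):
// bevy::utils::info("asset loaded");

// After:
use bevy::utils::tracing::{error, info, warn};

fn report(path: &str) {
    info!("loaded {path}");
    warn!("{path} is deprecated");
    error!("failed to reload {path}");
}
```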
||
|
|
39f9e07b5f |
Support scale factor for image render targets (#16796)
# Objective

I have something of a niche use case. I have a camera rendering pixel art with a scale factor set, and another camera that renders to an off-screen texture which is supposed to match the main camera exactly. However, when computing camera target info, Bevy [hardcodes a scale factor of 1.0](https://github.com/bevyengine/bevy/blob/116c2b02fe8a7589d1777af7dabd84dc756b5b0d/crates/bevy_render/src/camera/camera.rs#L828) for image targets, which means that my main camera and my image target camera get different `OrthographicProjection`s calculated.

## Solution

This PR adds an `ImageRenderTarget` struct which allows scale factors to be specified.

## Testing

I tested the affected examples on macOS and they still work. This is an additive change and should not break any existing code, apart from what is trivially fixable by following compiler error messages.

---

## Migration Guide

`RenderTarget::Image` now takes an `ImageRenderTarget` instead of a `Handle<Image>`. You can call `handle.into()` to construct an `ImageRenderTarget` using the same settings as before. |
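A sketch of the migration (the `camera_for` helper is hypothetical):

```rust
use bevy::prelude::*;
use bevy::render::camera::{ImageRenderTarget, RenderTarget};

fn camera_for(image: Handle<Image>) -> Camera {
    Camera {
        // `.into()` keeps the old behavior (scale factor 1.0); construct an
        // `ImageRenderTarget` yourself to supply a custom scale factor.
        target: RenderTarget::Image(image.into()),
        ..default()
    }
}
```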
||
|
|
73d68d60bb |
Change GpuImage::size from UVec2 to Extent3d (#16815)
# Objective

When preparing `GpuImage`s, we currently discard the `depth_or_array_layers` of the `Image`'s size by converting it into a `UVec2`. Fixes #16715.

## Solution

Change `GpuImage::size` to `Extent3d`, and just pass that through when creating `GpuImage`s. Also copy the `aspect_ratio` and `size` (now `size_2d` for disambiguation from the field) functions from `Image` to `GpuImage` for ease of use with 2D textures. I originally copied all size-related functions (like `width` and `height`), but I think they are unnecessary considering how visible the `size` field on `GpuImage` is compared to `Image`.

## Testing

Tested via `cargo r -p ci` for everything except docs; when generating docs it keeps spitting out a ton of

```
error[E0554]: `#![feature]` may not be used on the stable release channel
 --> crates/bevy_dylib/src/lib.rs:1:21
  |
1 | #![cfg_attr(docsrs, feature(doc_auto_cfg))]
  |
```

Not sure why this is happening, but it also happens without my changes, so it's almost certainly some strange issue specific to my machine.

## Migration Guide

- `GpuImage::size` is now an `Extent3d`. To easily get 2D size, use `size_2d()`. |
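A sketch of reading the new field (import paths are assumptions; `GpuImage` lives in `bevy_render`):

```rust
use bevy::math::UVec2;
use bevy::render::render_resource::Extent3d;
use bevy::render::texture::GpuImage;

fn log_dims(gpu_image: &GpuImage) {
    let size: Extent3d = gpu_image.size; // previously a UVec2
    let size_2d: UVec2 = gpu_image.size_2d(); // new convenience accessor
    println!(
        "{}x{}x{} (2d: {size_2d})",
        size.width, size.height, size.depth_or_array_layers
    );
}
```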
||
|
|
6e81a05c93 |
Headless by features (#16401)
# Objective

- Fixes #16152

## Solution

- Put `bevy_window` and `bevy_a11y` behind the `bevy_window` feature; they were the only difference
- Add `ScheduleRunnerPlugin` to the `DefaultPlugins` when `bevy_window` is disabled
- Remove `HeadlessPlugins`
- Update the `headless` example |
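A sketch of a headless app after this change (assumes Bevy is built with `default-features = false` so `bevy_window` is off and `ScheduleRunnerPlugin` is part of `DefaultPlugins`):

```rust
use bevy::app::ScheduleRunnerPlugin;
use bevy::prelude::*;
use core::time::Duration;

fn main() {
    App::new()
        .add_plugins(
            // Configure the loop rate of the runner that replaces the winit loop.
            DefaultPlugins.set(ScheduleRunnerPlugin::run_loop(
                Duration::from_secs_f64(1.0 / 60.0),
            )),
        )
        .run();
}
```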
||
|
|
40640fdf42 |
Don't reëxport bevy_image from bevy_render (#16163)
# Objective

Fixes #15940

## Solution

Remove the `pub use` and fix the compile errors. Make `bevy_image` available as `bevy::image`.

## Testing

Feature Frenzy would be good here! Maybe I'll learn how to use it if I have some time this weekend, or maybe a reviewer can use it.

## Migration Guide

Use `bevy_image` instead of `bevy_render::texture` items.

---------

Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
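A sketch of the import migration:

```rust
// Before (no longer re-exported):
// use bevy::render::texture::Image;

// After:
use bevy::image::Image;

fn describe(image: &Image) -> String {
    format!("{:?}", image.texture_descriptor.size)
}
```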
||
|
|
67567702f0 |
fix: removed WinitPlugin from headless_renderer example (#15818)
# Objective

The `headless_renderer` example is meant to showcase running bevy as a headless renderer, but if run without a display server (for example, over an SSH connection), a panic occurs in `bevy_winit` despite never creating a window:

```rust
bevy_winit-0.14.1/src/lib.rs:132:14:
winit-0.30.5/src/platform_impl/linux/mod.rs: neither WAYLAND_DISPLAY nor WAYLAND_SOCKET nor DISPLAY is set.
```

This example should run successfully in situations without an available display server, as although the GPU is used for rendering, no window is ever created.

## Solution

Disabling `WinitPlugin`, where the above panic occurs, allows the example to run in a fully headless environment.

## Testing

- I tested this change in normal circumstances with a display server (on macOS Sequoia and Asahi Linux) and behavior was normal.
- I tested with no display server by connecting via SSH, and running the example (on Asahi Linux). Previously this panicked, but with this change it runs normally.

## Considerations

- One could argue that ultimately the user should not need to remove `WinitPlugin`, and instead bevy should only throw the above panic when the application first attempts to create a window. |
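The fix amounts to disabling the plugin when building the plugin group; a minimal sketch:

```rust
use bevy::prelude::*;
use bevy::winit::WinitPlugin;

fn main() {
    App::new()
        // No window is ever created, so drop the winit event loop entirely.
        .add_plugins(DefaultPlugins.build().disable::<WinitPlugin>())
        .run();
}
```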
||
|
|
015f2c69ca |
Merge Style properties into Node. Use ComputedNode for computed properties. (#15975)
# Objective

Continue improving the user experience of our UI Node API in the direction specified by [Bevy's Next Generation Scene / UI System](https://github.com/bevyengine/bevy/discussions/14437)

## Solution

As specified in the document above, merge `Style` fields into `Node`, and move "computed Node fields" into `ComputedNode` (I chose this name over something like `ComputedNodeLayout` because it currently contains more than just layout info. If we want to break this up / rename these concepts, let's do that in a separate PR). `Style` has been removed.

This accomplishes a number of goals:

## Ergonomics wins

Specifying both `Node` and `Style` is now no longer required for non-default styles.

Before:

```rust
commands.spawn((
    Node::default(),
    Style {
        width: Val::Px(100.),
        ..default()
    },
));
```

After:

```rust
commands.spawn(Node {
    width: Val::Px(100.),
    ..default()
});
```

## Conceptual clarity

`Style` was never a comprehensive "style sheet". It only defined "core" style properties that all `Nodes` shared. Any "styled property" that couldn't fit that mold had to be in a separate component. A "real" style system would style properties _across_ components (`Node`, `Button`, etc). We have plans to build a true style system (see the doc linked above). By moving the `Style` fields to `Node`, we fully embrace `Node` as the driving concept and remove the "style system" confusion.

## Next Steps

* Consider identifying and splitting out "style properties that aren't core to Node". This should not happen for Bevy 0.15.

---

## Migration Guide

Move any fields set on `Style` into `Node` and replace all `Style` component usage with `Node`.

Before:

```rust
commands.spawn((
    Node::default(),
    Style {
        width: Val::Px(100.),
        ..default()
    },
));
```

After:

```rust
commands.spawn(Node {
    width: Val::Px(100.),
    ..default()
});
```

For any usage of the "computed node properties" that used to live on `Node`, use `ComputedNode` instead:

Before:

```rust
fn system(nodes: Query<&Node>) {
    for node in &nodes {
        let computed_size = node.size();
    }
}
```

After:

```rust
fn system(computed_nodes: Query<&ComputedNode>) {
    for computed_node in &computed_nodes {
        let computed_size = computed_node.size();
    }
}
```
|