Large scenes (#22409)

# Objective

Move https://github.com/DGriffin91/bevy_bistro_scene and
https://github.com/DGriffin91/bevy_caldera_scene into the bevy repo to
allow for easier testing.

This also adds https://github.com/DGriffin91/bevy_mod_mipmap_generator
in the new `large_scenes` folder, because GPU texture fetch cache
locality is important. This could eventually be made available more
generally as part of bevy, but I wanted to get something quick and basic
in for now.
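Mipmaps matter for cache locality because distant surfaces sample a small, contiguous mip level instead of striding across a full-resolution texture. As a rough illustration of what a generator does per level (a hand-rolled 2x2 box filter, not the actual `bevy_mod_mipmap_generator` API):

```rust
/// One 2x2 box-filter downsample step: averages each 2x2 block of the
/// source RGBA8 image into one texel of the half-resolution mip below it.
/// Assumes a square power-of-two image; real generators handle more cases.
fn downsample_rgba8(src: &[u8], size: usize) -> Vec<u8> {
    let dst_size = size / 2;
    let mut dst = vec![0u8; dst_size * dst_size * 4];
    for y in 0..dst_size {
        for x in 0..dst_size {
            for c in 0..4 {
                // Sum the four source texels covering this destination texel.
                let mut sum = 0u32;
                for (dy, dx) in [(0, 0), (0, 1), (1, 0), (1, 1)] {
                    let (sy, sx) = (y * 2 + dy, x * 2 + dx);
                    sum += src[(sy * size + sx) * 4 + c] as u32;
                }
                dst[(y * dst_size + x) * 4 + c] = (sum / 4) as u8;
            }
        }
    }
    dst
}

fn main() {
    // A 2x2 white/black checker collapses to a single mid-gray texel.
    let src = [
        255u8, 255, 255, 255, 0, 0, 0, 255, //
        0, 0, 0, 255, 255, 255, 255, 255,
    ];
    println!("{:?}", downsample_rgba8(&src, 2)); // one RGBA texel
}
```

Repeating this step until the image is 1x1 produces the full mip chain.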

I'm not totally sure what these READMEs should look like. I mostly just
copied them over and made a few tweaks.

This PR is meant to be just an initial starting point; it does not
address automatically acquiring the required assets.

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
Griffin
2026-01-08 16:59:28 -08:00
committed by GitHub
parent 39cb531992
commit 61dab1013a
14 changed files with 2795 additions and 0 deletions
@@ -23,6 +23,8 @@ assets/**/*.meta
crates/bevy_asset/imported_assets
imported_assets
.web-asset-cache
examples/large_scenes/bistro/assets/*
examples/large_scenes/caldera_hotel/assets/*
# Bevy Examples
example_showcase_config.ron
@@ -33,3 +35,6 @@ assets/scenes/load_scene_example-new.scn.ron
# Generated by "examples/window/screenshot.rs"
**/screenshot-*.png
# Generated by "examples/large_scenes"
compressed_texture_cache
@@ -30,6 +30,11 @@ members = [
"examples/no_std/*",
# Examples of compiling Bevy with automatic reflect type registration for platforms without `inventory` support.
"examples/reflection/auto_register_static",
# Examples of large bevy scenes.
"examples/large_scenes/bistro",
"examples/large_scenes/caldera_hotel",
# Mipmap generator for testing large bevy scenes with mips and texture compression.
"examples/large_scenes/mipmap_generator",
# Benchmarks
"benches",
# Internal tools that are not published.
@@ -0,0 +1,24 @@
[package]
name = "bistro"
version = "0.1.0"
edition = "2024"
publish = false
license = "MIT OR Apache-2.0"
[dependencies]
bevy = { path = "../../../", features = [
"bevy_camera_controller",
"free_camera",
] }
mipmap_generator = { path = "../mipmap_generator" }
argh = "0.1.12"
[features]
default = ["debug_text"]
debug_text = ["bevy/bevy_ui"]
pbr_transmission_textures = ["bevy/pbr_transmission_textures"]
pbr_multi_layer_material_textures = ["bevy/pbr_multi_layer_material_textures"]
pbr_anisotropy_texture = ["bevy/pbr_anisotropy_texture"]
pbr_specular_textures = ["bevy/pbr_specular_textures"]
@@ -0,0 +1,58 @@
# Bistro Example
Download the Amazon Lumberyard version of Bistro [from developer.nvidia.com](https://developer.nvidia.com/orca/amazon-lumberyard-bistro) (or see the link below for processed glTF files with instancing).
Reexport `BistroExterior.fbx` and `BistroInterior_Wine.fbx` as glTF files (in `.gltf` + `.bin` + textures format). Move the files into the respective `bistro_exterior` and `bistro_interior_wine` folders.
- Press 1, 2, or 3 for various camera positions.
- Press B for benchmark.
- Press Space to animate the camera along a path.
Run with texture compression while caching compressed images to disk for faster startup times:
`cargo run -p bistro --release --features mipmap_generator/compress -- --cache`
```console
Options:
--no-gltf-lights disable glTF lights
--minimal disable bloom, AO, AA, shadows
--compress compress textures (if they are not already, requires
compress feature)
--low-quality-compression
if low_quality_compression is set, only 0.5 byte/px formats
will be used (BC1, BC4) unless the alpha channel is in use,
then BC3 will be used. When low quality is set, compression
is generally faster than CompressionSpeed::UltraFast and
CompressionSpeed is ignored.
--cache compressed texture cache (requires compress feature)
--count quantity of bistros
--spin spin the bistros and camera
--hide-frame-time don't show frame time
--deferred use deferred shading
--no-frustum-culling
disable all frustum culling. Stresses queuing and batching
as all mesh material entities in the scene are always drawn.
--no-automatic-batching
disable automatic batching. Skips batching resulting in
heavy stress on render pass draw command encoding.
--no-view-occlusion-culling
disable gpu occlusion culling for the camera
--no-shadow-occlusion-culling
disable gpu occlusion culling for the directional light
--no-indirect-drawing
disable indirect drawing.
--no-cpu-culling disable CPU culling.
--help, help display usage information
```
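The format choice behind `--low-quality-compression` comes down to block footprint: BC1 and BC4 store each 4x4 texel block in 8 bytes (0.5 bytes/px), while BC3 and BC7 use 16 bytes (1 byte/px). A rough sketch of the size math, including the mip chain (a hypothetical helper under standard BCn block sizes, not the crate's actual `calculate_bcn_image_size_with_mips`):

```rust
/// Total byte size and mip count of a full mip chain for a square
/// power-of-two BCn texture. Every BCn mip occupies at least one 4x4 block.
fn bcn_size_with_mips(base_size: u32, block_bytes: u32) -> (u32, u32) {
    let mut size = base_size;
    let mut total = 0u32;
    let mut mips = 0u32;
    loop {
        // Each BCn block covers a 4x4 texel tile (rounded up at small mips).
        let blocks_per_side = (size + 3) / 4;
        total += blocks_per_side * blocks_per_side * block_bytes;
        mips += 1;
        if size == 1 {
            break;
        }
        size /= 2;
    }
    (total, mips)
}

fn main() {
    let (bc1, mips) = bcn_size_with_mips(2048, 8); // BC1/BC4: 8 bytes per block
    let (bc7, _) = bcn_size_with_mips(2048, 16); // BC3/BC7: 16 bytes per block
    println!("2048x2048 with {mips} mips: BC1 {bc1} bytes, BC7 {bc7} bytes");
}
```

The half-byte-per-pixel formats halve both disk cache size and GPU memory traffic, which is why the low-quality path is also the fast one.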
[Alternate processed files with instancing (glTF files on discord):](https://discord.com/channels/691052431525675048/1237853896471220314/1237859248067575910)
- Fixed most of the metallic-from-FBX issues with a script that makes every material dielectric unless "metal" appears in the material's name
- Made the plants alpha clip instead of blend
- Setup the glassware/liquid materials correctly
- Mesh origins are at individual bounding box center instead of world origin
- Removed duplicate vertices (there were many odd cases where one instance failed to match another it would otherwise exactly match)
- Made the scene use instances (unique mesh count 3880 -> 1188)
- Removed 2 cases where duplicated meshes were overlapping
- Setup some of the interior/exterior lights with actual sources
- Setup some basic fake GI
- Use included scene HDRI for IBL
@@ -0,0 +1,638 @@
// Press B for benchmark.
// Preferably after frame time is reading consistently, rust-analyzer has calmed down, and with locked gpu clocks.
use std::{
f32::consts::PI,
ops::{Add, Mul, Sub},
path::PathBuf,
time::Instant,
};
use argh::FromArgs;
use bevy::{
anti_alias::taa::TemporalAntiAliasing,
camera::visibility::{NoCpuCulling, NoFrustumCulling},
camera_controller::free_camera::{FreeCamera, FreeCameraPlugin},
core_pipeline::prepass::{DeferredPrepass, DepthPrepass},
diagnostic::DiagnosticsStore,
light::TransmittedShadowReceiver,
pbr::{DefaultOpaqueRendererMethod, ScreenSpaceAmbientOcclusion},
post_process::bloom::Bloom,
render::{
batching::NoAutomaticBatching, experimental::occlusion_culling::OcclusionCulling,
render_resource::Face, view::NoIndirectDrawing,
},
scene::SceneInstanceReady,
};
use bevy::{
camera::ScreenSpaceTransmissionQuality, light::CascadeShadowConfigBuilder, render::view::Hdr,
};
use bevy::{
diagnostic::FrameTimeDiagnosticsPlugin,
prelude::*,
window::{PresentMode, WindowResolution},
winit::{UpdateMode, WinitSettings},
};
use mipmap_generator::{
generate_mipmaps, MipmapGeneratorDebugTextPlugin, MipmapGeneratorPlugin,
MipmapGeneratorSettings,
};
use crate::light_consts::lux;
#[derive(FromArgs, Resource, Clone)]
/// Config
pub struct Args {
/// disable glTF lights
#[argh(switch)]
no_gltf_lights: bool,
/// disable bloom, AO, AA, shadows
#[argh(switch)]
minimal: bool,
/// compress textures (if they are not already, requires compress feature)
#[argh(switch)]
compress: bool,
/// if low_quality_compression is set, only 0.5 byte/px formats will be used (BC1, BC4) unless the alpha channel is in use, then BC3 will be used.
/// When low quality is set, compression is generally faster than CompressionSpeed::UltraFast and CompressionSpeed is ignored.
#[argh(switch)]
low_quality_compression: bool,
/// compressed texture cache (requires compress feature)
#[argh(switch)]
cache: bool,
/// quantity of bistros
#[argh(option, default = "1")]
count: u32,
/// spin the bistros and camera
#[argh(switch)]
spin: bool,
/// don't show frame time
#[argh(switch)]
hide_frame_time: bool,
/// use deferred shading
#[argh(switch)]
deferred: bool,
/// disable all frustum culling. Stresses queuing and batching as all mesh material entities in the scene are always drawn.
#[argh(switch)]
no_frustum_culling: bool,
/// disable automatic batching. Skips batching resulting in heavy stress on render pass draw command encoding.
#[argh(switch)]
no_automatic_batching: bool,
/// disable gpu occlusion culling for the camera
#[argh(switch)]
no_view_occlusion_culling: bool,
/// disable gpu occlusion culling for the directional light
#[argh(switch)]
no_shadow_occlusion_culling: bool,
/// disable indirect drawing.
#[argh(switch)]
no_indirect_drawing: bool,
/// disable CPU culling.
#[argh(switch)]
no_cpu_culling: bool,
}
pub fn main() {
let args: Args = argh::from_env();
let mut app = App::new();
app.init_resource::<CameraPositions>()
.init_resource::<FrameLowHigh>()
.insert_resource(GlobalAmbientLight::NONE)
.insert_resource(args.clone())
.insert_resource(ClearColor(Color::srgb(1.75, 1.9, 1.99)))
.insert_resource(WinitSettings {
focused_mode: UpdateMode::Continuous,
unfocused_mode: UpdateMode::Continuous,
})
.add_plugins(DefaultPlugins.set(WindowPlugin {
primary_window: Some(Window {
present_mode: PresentMode::Immediate,
resolution: WindowResolution::new(1920, 1080).with_scale_factor_override(1.0),
..default()
}),
..default()
}))
// Generating mipmaps takes a minute.
// Mipmap generation can be skipped if ktx2 textures are used.
.insert_resource(MipmapGeneratorSettings {
anisotropic_filtering: 16,
compression: args.compress.then(Default::default),
compressed_image_data_cache_path: if args.cache {
Some(PathBuf::from("compressed_texture_cache"))
} else {
None
},
low_quality: args.low_quality_compression,
..default()
})
.add_plugins((
FrameTimeDiagnosticsPlugin {
max_history_length: 1000,
..default()
},
MipmapGeneratorPlugin,
MipmapGeneratorDebugTextPlugin,
FreeCameraPlugin,
))
.add_systems(Startup, setup)
.add_systems(
Update,
(
generate_mipmaps::<StandardMaterial>,
input,
run_animation,
spin,
frame_time_system,
benchmark,
)
.chain(),
);
if args.no_frustum_culling {
app.add_systems(Update, add_no_frustum_culling);
}
if args.deferred {
app.insert_resource(DefaultOpaqueRendererMethod::deferred());
}
app.run();
}
#[derive(Component)]
pub struct Spin;
#[derive(Component)]
struct FrameTimeText;
pub fn setup(mut commands: Commands, asset_server: Res<AssetServer>, args: Res<Args>) {
println!("Loading models, generating mipmaps");
let bistro_exterior = asset_server.load("bistro_exterior/BistroExterior.gltf#Scene0");
commands
.spawn((SceneRoot(bistro_exterior.clone()), Spin))
.observe(proc_scene);
let bistro_interior = asset_server.load("bistro_interior_wine/BistroInterior_Wine.gltf#Scene0");
commands
.spawn((SceneRoot(bistro_interior.clone()), Spin))
.observe(proc_scene);
let mut count = 0;
if args.count > 1 {
let quantity = args.count - 1;
let side = (quantity as f32).sqrt().ceil() as i32 / 2;
'outer: for x in -side..=side {
for z in -side..=side {
if count >= quantity {
break 'outer;
}
if x == 0 && z == 0 {
continue;
}
commands
.spawn((
SceneRoot(bistro_exterior.clone()),
Transform::from_xyz(x as f32 * 150.0, 0.0, z as f32 * 150.0),
Spin,
))
.observe(proc_scene);
commands
.spawn((
SceneRoot(bistro_interior.clone()),
Transform::from_xyz(x as f32 * 150.0, 0.3, z as f32 * 150.0 - 0.2),
Spin,
))
.observe(proc_scene);
count += 1;
}
}
}
if !args.no_gltf_lights {
// In Repo glTF
commands.spawn((
SceneRoot(asset_server.load("BistroExteriorFakeGI.gltf#Scene0")),
Spin,
));
}
// Sun
commands
.spawn((
Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, PI * -0.35, PI * -0.13, 0.0)),
DirectionalLight {
color: Color::srgb(1.0, 0.87, 0.78),
illuminance: lux::FULL_DAYLIGHT,
shadows_enabled: !args.minimal,
shadow_depth_bias: 0.1,
shadow_normal_bias: 0.2,
..default()
},
CascadeShadowConfigBuilder {
num_cascades: 3,
minimum_distance: 0.05,
maximum_distance: 100.0,
first_cascade_far_bound: 10.0,
overlap_proportion: 0.2,
}
.build(),
))
.insert_if(OcclusionCulling, || !args.no_shadow_occlusion_culling);
// Camera
let mut cam = commands.spawn((
Msaa::Off,
Camera3d {
screen_space_specular_transmission_steps: 0,
screen_space_specular_transmission_quality: ScreenSpaceTransmissionQuality::Low,
..default()
},
Hdr,
Transform::from_xyz(-10.5, 1.7, -1.0).looking_at(Vec3::new(0.0, 3.5, 0.0), Vec3::Y),
Projection::Perspective(PerspectiveProjection {
fov: std::f32::consts::PI / 3.0,
near: 0.1,
far: 1000.0,
aspect_ratio: 1.0,
..Default::default()
}),
EnvironmentMapLight {
diffuse_map: asset_server.load("environment_maps/san_giuseppe_bridge_4k_diffuse.ktx2"),
specular_map: asset_server
.load("environment_maps/san_giuseppe_bridge_4k_specular.ktx2"),
intensity: 600.0,
..default()
},
FreeCamera::default(),
Spin,
));
cam.insert_if(DepthPrepass, || args.deferred)
.insert_if(DeferredPrepass, || args.deferred)
.insert_if(OcclusionCulling, || !args.no_view_occlusion_culling)
.insert_if(NoFrustumCulling, || args.no_frustum_culling)
.insert_if(NoAutomaticBatching, || args.no_automatic_batching)
.insert_if(NoIndirectDrawing, || args.no_indirect_drawing)
.insert_if(NoCpuCulling, || args.no_cpu_culling);
if !args.minimal {
cam.insert((
Bloom {
intensity: 0.02,
..default()
},
TemporalAntiAliasing::default(),
))
.insert(ScreenSpaceAmbientOcclusion::default());
}
if !args.hide_frame_time {
commands
.spawn((
Node {
left: Val::Px(1.5),
top: Val::Px(1.5),
..default()
},
GlobalZIndex(-1),
))
.with_children(|parent| {
parent.spawn((Text::new(""), TextColor(Color::BLACK), FrameTimeText));
});
commands.spawn(Node::default()).with_children(|parent| {
parent.spawn((Text::new(""), TextColor(Color::WHITE), FrameTimeText));
});
}
}
pub fn all_children<F: FnMut(Entity)>(
children: &Children,
children_query: &Query<&Children>,
closure: &mut F,
) {
for child in children {
if let Ok(children) = children_query.get(*child) {
all_children(children, children_query, closure);
}
closure(*child);
}
}
#[allow(clippy::type_complexity, clippy::too_many_arguments)]
pub fn proc_scene(
scene_ready: On<SceneInstanceReady>,
mut commands: Commands,
children: Query<&Children>,
has_std_mat: Query<&MeshMaterial3d<StandardMaterial>>,
mut materials: ResMut<Assets<StandardMaterial>>,
lights: Query<Entity, Or<(With<PointLight>, With<DirectionalLight>, With<SpotLight>)>>,
cameras: Query<Entity, With<Camera>>,
args: Res<Args>,
) {
for entity in children.iter_descendants(scene_ready.entity) {
// These normal maps need their Y channel flipped
if let Ok(mat_h) = has_std_mat.get(entity)
&& let Some(mat) = materials.get_mut(mat_h)
{
mat.flip_normal_map_y = true;
match mat.alpha_mode {
AlphaMode::Mask(_) => {
mat.diffuse_transmission = 0.6;
mat.double_sided = true;
mat.cull_mode = None;
mat.thickness = 0.2;
commands.entity(entity).insert(TransmittedShadowReceiver);
}
AlphaMode::Opaque => {
mat.double_sided = false;
mat.cull_mode = Some(Face::Back);
}
_ => (),
}
}
if args.no_gltf_lights {
// Has a bunch of lights by default
if lights.get(entity).is_ok() {
commands.entity(entity).despawn();
}
}
// Has a bunch of cameras by default
if cameras.get(entity).is_ok() {
commands.entity(entity).despawn();
}
}
}
#[derive(Resource, Deref, DerefMut)]
struct CameraPositions([Transform; 3]);
impl Default for CameraPositions {
fn default() -> Self {
Self([
Transform {
translation: Vec3::new(-10.5, 1.7, -1.0),
rotation: Quat::from_array([-0.05678932, 0.7372272, -0.062454797, -0.670351]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(56.23809, 2.9985719, 28.96291),
rotation: Quat::from_array([0.0020175162, 0.35272083, -0.0007605003, 0.93572617]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(5.7861176, 3.3475509, -8.821455),
rotation: Quat::from_array([-0.0049382094, -0.98193514, -0.025878597, 0.18737496]),
scale: Vec3::ONE,
},
])
}
}
const ANIM_SPEED: f32 = 0.2;
const ANIM_HYSTERESIS: f32 = 0.1; // EMA/LPF
const ANIM_CAM: [Transform; 3] = [
Transform {
translation: Vec3::new(-6.414026, 8.179898, -23.550516),
rotation: Quat::from_array([-0.016413536, -0.88136566, -0.030704278, 0.4711502]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(-14.752817, 6.279289, 5.691277),
rotation: Quat::from_array([-0.031593435, -0.516736, -0.019086324, 0.8553488]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(5.1539426, 8.142523, 16.436222),
rotation: Quat::from_array([-0.07907656, -0.07581916, -0.006031934, 0.99396276]),
scale: Vec3::ONE,
},
];
fn input(
input: Res<ButtonInput<KeyCode>>,
mut camera: Query<&mut Transform, With<Camera>>,
positions: Res<CameraPositions>,
) {
let Ok(mut transform) = camera.single_mut() else {
return;
};
if input.just_pressed(KeyCode::KeyI) {
info!("{:?}", transform);
}
if input.just_pressed(KeyCode::Digit1) {
*transform = positions[0]
}
if input.just_pressed(KeyCode::Digit2) {
*transform = positions[1]
}
if input.just_pressed(KeyCode::Digit3) {
*transform = positions[2]
}
}
fn lerp<T>(a: T, b: T, t: f32) -> T
where
T: Copy + Add<Output = T> + Sub<Output = T> + Mul<f32, Output = T>,
{
a + (b - a) * t
}
fn follow_path(points: &[Transform], progress: f32) -> Transform {
let total_segments = (points.len() - 1) as f32;
let progress = progress.clamp(0.0, 1.0);
let mut segment_progress = progress * total_segments;
let segment_index = segment_progress.floor() as usize;
segment_progress -= segment_index as f32;
let a = points[segment_index];
let b = points[(segment_index + 1).min(points.len() - 1)];
Transform {
translation: lerp(a.translation, b.translation, segment_progress),
rotation: lerp(a.rotation, b.rotation, segment_progress),
scale: lerp(a.scale, b.scale, segment_progress),
}
}
fn run_animation(
time: Res<Time>,
input: Res<ButtonInput<KeyCode>>,
mut animation_active: Local<bool>,
mut camera: Query<&mut Transform, With<Camera>>,
) {
let Ok(mut cam_tr) = camera.single_mut() else {
return;
};
if input.just_pressed(KeyCode::Space) {
*animation_active = !*animation_active;
}
if !*animation_active {
return;
}
let progress = (time.elapsed_secs() * ANIM_SPEED).fract();
let cycle = 1.0 - (progress * 2.0 - 1.0).abs();
let path_state = follow_path(&ANIM_CAM, cycle);
cam_tr.translation = lerp(cam_tr.translation, path_state.translation, ANIM_HYSTERESIS);
cam_tr.rotation = lerp(cam_tr.rotation, path_state.rotation, ANIM_HYSTERESIS).normalize();
}
fn spin(
camera: Single<Entity, With<Camera>>,
mut things_to_spin: Query<&mut Transform, With<Spin>>,
time: Res<Time>,
args: Res<Args>,
mut positions: ResMut<CameraPositions>,
) {
if args.spin {
let camera_position = things_to_spin.get(*camera).unwrap().translation;
let spin = |thing_to_spin: &mut Transform| {
thing_to_spin.rotate_around(camera_position, Quat::from_rotation_y(time.delta_secs()));
};
things_to_spin.iter_mut().for_each(|mut s| spin(s.as_mut())); // Mut<Transform> must be reborrowed as &mut Transform
positions.iter_mut().for_each(spin);
}
}
#[allow(clippy::too_many_arguments)]
fn benchmark(
input: Res<ButtonInput<KeyCode>>,
mut camera_transform: Single<&mut Transform, With<Camera>>,
materials: Res<Assets<StandardMaterial>>,
meshes: Res<Assets<Mesh>>,
has_std_mat: Query<&MeshMaterial3d<StandardMaterial>>,
has_mesh: Query<&Mesh3d>,
mut bench_started: Local<Option<Instant>>,
mut bench_frame: Local<u32>,
mut count_per_step: Local<u32>,
time: Res<Time>,
positions: Res<CameraPositions>,
mut low_high: ResMut<FrameLowHigh>,
) {
if input.just_pressed(KeyCode::KeyB) && bench_started.is_none() {
low_high.bench_reset();
*bench_started = Some(Instant::now());
*bench_frame = 0;
// Try to render for around 3s or at least 60 frames per step
*count_per_step = ((3.0 / time.delta_secs()) as u32).max(60);
println!(
"Starting Benchmark with {} frames per step",
*count_per_step
);
}
if bench_started.is_none() {
return;
}
if *bench_frame == 0 {
**camera_transform = positions[0]
} else if *bench_frame == *count_per_step {
**camera_transform = positions[1]
} else if *bench_frame == *count_per_step * 2 {
**camera_transform = positions[2]
} else if *bench_frame == *count_per_step * 3 {
let elapsed = bench_started.unwrap().elapsed().as_secs_f32();
println!(
"{:>7.2}ms Benchmark avg cpu frame time",
(elapsed / *bench_frame as f32) * 1000.0
);
let r = 1.0 / *bench_frame as f64;
println!("{:>7.2}ms avg 1% low", low_high.sum_one_percent_low * r);
println!("{:>7.2}ms avg 1% high", low_high.sum_one_percent_high * r);
println!(
"{:>7} Meshes\n{:>7} Mesh Instances\n{:>7} Materials\n{:>7} Material Instances",
meshes.len(),
has_mesh.iter().len(),
materials.len(),
has_std_mat.iter().len(),
);
*bench_started = None;
*bench_frame = 0;
**camera_transform = positions[0];
}
*bench_frame += 1;
low_high.bench_step();
}
pub fn add_no_frustum_culling(
mut commands: Commands,
convert_query: Query<
Entity,
(
Without<NoFrustumCulling>,
With<MeshMaterial3d<StandardMaterial>>,
),
>,
) {
for entity in convert_query.iter() {
commands.entity(entity).insert(NoFrustumCulling);
}
}
#[derive(Resource, Default)]
struct FrameLowHigh {
one_percent_low: f64,
one_percent_high: f64,
sum_one_percent_low: f64,
sum_one_percent_high: f64,
}
impl FrameLowHigh {
fn bench_reset(&mut self) {
self.sum_one_percent_high = 0.0;
self.sum_one_percent_low = 0.0;
}
fn bench_step(&mut self) {
self.sum_one_percent_high += self.one_percent_high;
self.sum_one_percent_low += self.one_percent_low;
}
}
fn frame_time_system(
diagnostics: Res<DiagnosticsStore>,
mut text: Query<&mut Text, With<FrameTimeText>>,
mut measurements: Local<Vec<f64>>,
mut low_high: ResMut<FrameLowHigh>,
) {
if let Some(frame_time) = diagnostics.get(&FrameTimeDiagnosticsPlugin::FRAME_TIME) {
let mut string = format!(
"\n{:>7.2}ms ema\n{:>7.2}ms sma\n",
frame_time.smoothed().unwrap_or_default(),
frame_time.average().unwrap_or_default()
);
if frame_time.history_len() >= 100 {
measurements.clear();
measurements.extend(frame_time.measurements().map(|t| t.value));
measurements.sort_by(|a, b| a.partial_cmp(b).unwrap());
let count = measurements.len() / 100;
low_high.one_percent_low = measurements.iter().take(count).sum::<f64>() / count as f64;
low_high.one_percent_high =
measurements.iter().rev().take(count).sum::<f64>() / count as f64;
string.push_str(&format!(
"{:>7.2}ms 1% low\n{:>7.2}ms 1% high\n",
low_high.one_percent_low, low_high.one_percent_high
));
}
for mut t in &mut text {
t.0 = string.clone();
}
};
}
@@ -0,0 +1,15 @@
[package]
name = "caldera_hotel"
version = "0.1.0"
edition = "2024"
publish = false
license = "MIT OR Apache-2.0"
[dependencies]
bevy = { path = "../../../", features = [
"bevy_camera_controller",
"free_camera",
] }
mipmap_generator = { path = "../mipmap_generator" }
argh = "0.1.12"
@@ -0,0 +1,42 @@
# Caldera Hotel 01 Example
Currently only set up to load `hotel_01`.
Download the scene from the [Activision GitHub repo](https://github.com/Activision/caldera).
Reexport `map_source/prefabs/br/wz_vg/mp_wz_island/commercial/hotel_01.usd` as `hotel_01.glb`.
[Alternate processed files applied animation base poses (glTF files on discord)](https://discord.com/channels/691052431525675048/1159383661062389790/1283123002346705018)
When importing the USD file into Blender, select only `Meshes` under `Object Types`.
Note: many of the meshes in the original scene use an animation base pose to position the object. Consider applying these transformations before exporting from Blender.
Press 1, 2, or 3 for various camera locations. Press B for benchmark (see console for results).
```console
Options:
--minimal disable bloom, AO, AA, shadows
--random-materials
assign randomly generated materials to each unique mesh
(mesh instances also share materials)
--texture-count quantity of unique textures sets to randomly select from. (A
texture set being: base_color, roughness)
--count quantity of hotel 01 models
--deferred use deferred shading
--no-frustum-culling
disable all frustum culling. Stresses queuing and batching
as all mesh material entities in the scene are always drawn.
--no-automatic-batching
disable automatic batching. Skips batching resulting in
heavy stress on render pass draw command encoding.
--no-view-occlusion-culling
disable gpu occlusion culling for the camera
--no-shadow-occlusion-culling
disable gpu occlusion culling for the directional light
--no-indirect-drawing
disable indirect drawing.
--no-cpu-culling disable CPU culling.
--spin spin the bistros and camera
--hide-frame-time don't show frame time
--help, help display usage information
```
@@ -0,0 +1,609 @@
// Press B for benchmark.
// Preferably after frame time is reading consistently, rust-analyzer has calmed down, and with locked gpu clocks.
use std::{f32::consts::PI, time::Instant};
use argh::FromArgs;
use bevy::{
anti_alias::taa::TemporalAntiAliasing,
camera::visibility::{NoCpuCulling, NoFrustumCulling},
camera_controller::free_camera::{FreeCamera, FreeCameraPlugin},
core_pipeline::prepass::{DeferredPrepass, DepthPrepass},
diagnostic::{DiagnosticsStore, FrameTimeDiagnosticsPlugin},
image::{ImageAddressMode, ImageSampler, ImageSamplerDescriptor},
light::{CascadeShadowConfig, CascadeShadowConfigBuilder},
pbr::{DefaultOpaqueRendererMethod, ScreenSpaceAmbientOcclusion},
post_process::bloom::Bloom,
prelude::*,
render::{
batching::NoAutomaticBatching,
experimental::occlusion_culling::OcclusionCulling,
render_resource::{
Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
},
view::{Hdr, NoIndirectDrawing},
},
scene::SceneInstanceReady,
window::{PresentMode, WindowResolution},
winit::{UpdateMode, WinitSettings},
};
use crate::light_consts::lux;
#[derive(FromArgs, Resource, Clone)]
/// Config
pub struct Args {
/// disable bloom, AO, AA, shadows
#[argh(switch)]
minimal: bool,
/// assign randomly generated materials to each unique mesh (mesh instances also share materials)
#[argh(switch)]
random_materials: bool,
/// quantity of unique textures sets to randomly select from. (A texture set being: base_color, roughness)
#[argh(option, default = "0")]
texture_count: u32,
/// quantity of hotel 01 models
#[argh(option, default = "1")]
count: u32,
/// use deferred shading
#[argh(switch)]
deferred: bool,
/// disable all frustum culling. Stresses queuing and batching as all mesh material entities in the scene are always drawn.
#[argh(switch)]
no_frustum_culling: bool,
/// disable automatic batching. Skips batching resulting in heavy stress on render pass draw command encoding.
#[argh(switch)]
no_automatic_batching: bool,
/// disable gpu occlusion culling for the camera
#[argh(switch)]
no_view_occlusion_culling: bool,
/// disable gpu occlusion culling for the directional light
#[argh(switch)]
no_shadow_occlusion_culling: bool,
/// disable indirect drawing.
#[argh(switch)]
no_indirect_drawing: bool,
/// disable CPU culling.
#[argh(switch)]
no_cpu_culling: bool,
/// spin the bistros and camera
#[argh(switch)]
spin: bool,
/// don't show frame time
#[argh(switch)]
hide_frame_time: bool,
}
pub fn main() {
let args: Args = argh::from_env();
let mut app = App::new();
app.init_resource::<CameraPositions>()
.init_resource::<FrameLowHigh>()
.insert_resource(args.clone())
.insert_resource(WinitSettings {
focused_mode: UpdateMode::Continuous,
unfocused_mode: UpdateMode::Continuous,
})
.add_plugins(DefaultPlugins.set(WindowPlugin {
primary_window: Some(Window {
present_mode: PresentMode::Immediate,
resolution: WindowResolution::new(1920, 1080).with_scale_factor_override(1.0),
..default()
}),
..default()
}))
.add_plugins((
FrameTimeDiagnosticsPlugin {
max_history_length: 1000,
..default()
},
FreeCameraPlugin,
))
.add_systems(Startup, setup)
.add_systems(Update, (input, spin, frame_time_system, benchmark).chain());
if args.no_frustum_culling {
app.add_systems(Update, add_no_frustum_culling);
}
if args.deferred {
app.insert_resource(DefaultOpaqueRendererMethod::deferred());
}
app.run();
}
#[derive(Component)]
pub struct Spin;
#[derive(Component)]
struct FrameTimeText;
#[derive(Component)]
pub struct PostProcScene;
pub fn setup(
mut commands: Commands,
asset_server: Res<AssetServer>,
args: Res<Args>,
positions: Res<CameraPositions>,
) {
let hotel_01 = asset_server.load("hotel_01.glb#Scene0");
commands
.spawn((
SceneRoot(hotel_01.clone()),
Transform::from_scale(Vec3::splat(0.01)),
PostProcScene,
Spin,
))
.observe(assign_rng_materials);
let mut count = 0;
if args.count > 1 {
let quantity = args.count - 1;
let side = (quantity as f32).sqrt().ceil() as i32 / 2;
'outer: for x in -side..=side {
for z in -side..=side {
if count >= quantity {
break 'outer;
}
if x == 0 && z == 0 {
continue;
}
commands.spawn((
SceneRoot(hotel_01.clone()),
Transform::from_xyz(x as f32 * 50.0, 0.0, z as f32 * 50.0)
.with_scale(Vec3::splat(0.01)),
Spin,
));
count += 1;
}
}
}
// Sun
commands
.spawn((
Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, PI * -0.35, PI * -0.13, 0.0)),
DirectionalLight {
color: Color::srgb(1.0, 0.87, 0.78),
illuminance: lux::FULL_DAYLIGHT,
shadows_enabled: !args.minimal,
shadow_depth_bias: 0.2,
shadow_normal_bias: 0.2,
..default()
},
CascadeShadowConfig::from(CascadeShadowConfigBuilder {
num_cascades: 3,
minimum_distance: 0.1,
maximum_distance: 80.0,
first_cascade_far_bound: 5.0,
overlap_proportion: 0.2,
}),
))
.insert_if(OcclusionCulling, || !args.no_shadow_occlusion_culling);
// Camera
let mut cam = commands.spawn((
Msaa::Off,
Camera3d::default(),
Hdr,
positions[0],
Projection::Perspective(PerspectiveProjection {
fov: std::f32::consts::PI / 3.0,
near: 0.1,
far: 1000.0,
..Default::default()
}),
EnvironmentMapLight {
diffuse_map: asset_server.load("environment_maps/pisa_diffuse_rgb9e5_zstd.ktx2"),
specular_map: asset_server.load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
intensity: 1000.0,
..default()
},
FreeCamera::default(),
Spin,
));
cam.insert_if(DepthPrepass, || args.deferred)
.insert_if(DeferredPrepass, || args.deferred)
.insert_if(OcclusionCulling, || !args.no_view_occlusion_culling)
.insert_if(NoFrustumCulling, || args.no_frustum_culling)
.insert_if(NoAutomaticBatching, || args.no_automatic_batching)
.insert_if(NoIndirectDrawing, || args.no_indirect_drawing)
.insert_if(NoCpuCulling, || args.no_cpu_culling);
if !args.minimal {
cam.insert((
Bloom {
intensity: 0.02,
..default()
},
TemporalAntiAliasing::default(),
))
.insert(ScreenSpaceAmbientOcclusion::default());
}
if !args.hide_frame_time {
commands
.spawn((
Node {
left: Val::Px(1.5),
top: Val::Px(1.5),
..default()
},
GlobalZIndex(-1),
))
.with_children(|parent| {
parent.spawn((Text::new(""), TextColor(Color::BLACK), FrameTimeText));
});
commands.spawn(Node::default()).with_children(|parent| {
parent.spawn((Text::new(""), TextColor(Color::WHITE), FrameTimeText));
});
}
}
// Go through each unique mesh and randomly generate a material.
// One material per unique mesh, so instancing is maintained.
#[allow(clippy::too_many_arguments)]
pub fn assign_rng_materials(
scene_ready: On<SceneInstanceReady>,
mut commands: Commands,
mut materials: ResMut<Assets<StandardMaterial>>,
mut images: ResMut<Assets<Image>>,
meshes: Res<Assets<Mesh>>,
mesh_instances: Query<(Entity, &Mesh3d)>,
args: Res<Args>,
asset_server: Res<AssetServer>,
scenes: Query<&SceneRoot>,
) {
if !args.random_materials {
return;
}
let Ok(scene) = scenes.get(scene_ready.entity) else {
return;
};
let scene_loaded = asset_server
.get_recursive_dependency_load_state(&scene.0)
.map(|state| state.is_loaded())
.unwrap_or(false);
if !scene_loaded {
warn!("get_recursive_dependency_load_state not finished!");
}
const MESH_INSTANCE_QTY: usize = 35689;
if MESH_INSTANCE_QTY != mesh_instances.iter().len() {
warn!(
"Mesh quantity appears incorrect. Expected: {}. Found: {}!",
MESH_INSTANCE_QTY,
mesh_instances.iter().len()
)
}
let base_color_textures = (0..args.texture_count)
.map(|i| {
images.add(generate_random_compressed_texture_with_mipmaps(
2048, false, i,
))
})
.collect::<Vec<_>>();
let roughness_textures = (0..args.texture_count)
.map(|i| {
images.add(generate_random_compressed_texture_with_mipmaps(
2048,
false, // Using bc4 here seems to not work
i + 2048,
))
})
.collect::<Vec<_>>();
for (i, (mesh_h, _mesh)) in meshes.iter().enumerate() {
let mut base_color_texture = None;
let mut roughness_texture = None;
if !base_color_textures.is_empty() {
base_color_texture = Some(base_color_textures[i % base_color_textures.len()].clone());
}
if !roughness_textures.is_empty() {
roughness_texture = Some(roughness_textures[i % roughness_textures.len()].clone());
}
let unique_material = materials.add(StandardMaterial {
base_color: Color::srgb(
hash_noise(i as u32, 0, 0),
hash_noise(i as u32, 0, 1),
hash_noise(i as u32, 0, 2),
),
base_color_texture,
metallic_roughness_texture: roughness_texture,
..default()
});
for (entity, mesh_instance_h) in mesh_instances.iter() {
if mesh_instance_h.id() == mesh_h {
commands
.entity(entity)
.insert(MeshMaterial3d::from(unique_material.clone()));
}
}
}
}
fn generate_random_compressed_texture_with_mipmaps(size: u32, bc4: bool, seed: u32) -> Image {
let (bytes, mip_count) = calculate_bcn_image_size_with_mips(size, if bc4 { 8 } else { 16 });
let data = (0..bytes).map(|i| uhash(i, seed) as u8).collect::<Vec<_>>();
Image {
texture_descriptor: TextureDescriptor {
label: None,
size: Extent3d {
width: size,
height: size,
..default()
},
dimension: TextureDimension::D2,
format: if bc4 {
TextureFormat::Bc4RUnorm
} else {
TextureFormat::Bc7RgbaUnormSrgb
},
mip_level_count: mip_count,
sample_count: 1,
usage: TextureUsages::TEXTURE_BINDING | TextureUsages::COPY_DST,
view_formats: &[],
},
sampler: ImageSampler::Descriptor(ImageSamplerDescriptor {
address_mode_u: ImageAddressMode::Repeat,
address_mode_v: ImageAddressMode::Repeat,
..default()
}),
data: Some(data),
..Default::default()
}
}
#[derive(Resource, Deref, DerefMut)]
pub struct CameraPositions([Transform; 3]);
impl Default for CameraPositions {
fn default() -> Self {
Self([
Transform {
translation: Vec3::new(-20.147331, 16.818098, 42.806145),
rotation: Quat::from_array([-0.22917402, -0.34915298, -0.08848568, 0.9042908]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(1.6168646, 1.8304176, -5.846825),
rotation: Quat::from_array([-0.0007061247, -0.99179053, 0.12775362, -0.005481863]),
scale: Vec3::ONE,
},
Transform {
translation: Vec3::new(23.97184, 1.8938808, 30.568554),
rotation: Quat::from_array([-0.0013945175, 0.4685419, 0.00073959737, 0.8834399]),
scale: Vec3::ONE,
},
])
}
}
fn input(
input: Res<ButtonInput<KeyCode>>,
mut camera: Query<&mut Transform, With<Camera>>,
positions: Res<CameraPositions>,
) {
let Ok(mut transform) = camera.single_mut() else {
return;
};
if input.just_pressed(KeyCode::KeyI) {
info!("{:?}", transform);
}
if input.just_pressed(KeyCode::Digit1) {
*transform = positions[0]
}
if input.just_pressed(KeyCode::Digit2) {
*transform = positions[1]
}
if input.just_pressed(KeyCode::Digit3) {
*transform = positions[2]
}
}
fn spin(
camera: Single<Entity, With<Camera>>,
mut things_to_spin: Query<&mut Transform, With<Spin>>,
time: Res<Time>,
args: Res<Args>,
mut positions: ResMut<CameraPositions>,
) {
if args.spin {
let camera_position = things_to_spin.get(*camera).unwrap().translation;
let spin = |thing_to_spin: &mut Transform| {
thing_to_spin.rotate_around(camera_position, Quat::from_rotation_y(time.delta_secs()));
};
        things_to_spin.iter_mut().for_each(|mut s| spin(s.as_mut())); // Mut<Transform> must be reborrowed via as_mut() to get the &mut Transform the closure expects
positions.iter_mut().for_each(spin);
}
}
#[allow(clippy::too_many_arguments)]
fn benchmark(
input: Res<ButtonInput<KeyCode>>,
mut camera_transform: Single<&mut Transform, With<Camera>>,
materials: Res<Assets<StandardMaterial>>,
meshes: Res<Assets<Mesh>>,
has_std_mat: Query<&MeshMaterial3d<StandardMaterial>>,
has_mesh: Query<&Mesh3d>,
mut bench_started: Local<Option<Instant>>,
mut bench_frame: Local<u32>,
mut count_per_step: Local<u32>,
time: Res<Time>,
positions: Res<CameraPositions>,
mut low_high: ResMut<FrameLowHigh>,
) {
if input.just_pressed(KeyCode::KeyB) && bench_started.is_none() {
low_high.bench_reset();
*bench_started = Some(Instant::now());
*bench_frame = 0;
// Try to render for around 3s or at least 60 frames per step
*count_per_step = ((3.0 / time.delta_secs()) as u32).max(60);
println!(
"Starting Benchmark with {} frames per step",
*count_per_step
);
}
if bench_started.is_none() {
return;
}
if *bench_frame == 0 {
**camera_transform = positions[0]
} else if *bench_frame == *count_per_step {
**camera_transform = positions[1]
} else if *bench_frame == *count_per_step * 2 {
**camera_transform = positions[2]
} else if *bench_frame == *count_per_step * 3 {
let elapsed = bench_started.unwrap().elapsed().as_secs_f32();
println!(
"{:>7.2}ms Benchmark avg cpu frame time",
(elapsed / *bench_frame as f32) * 1000.0
);
let r = 1.0 / *bench_frame as f64;
println!("{:>7.2}ms avg 1% low", low_high.sum_one_percent_low * r);
println!("{:>7.2}ms avg 1% high", low_high.sum_one_percent_high * r);
println!(
"{:>7} Meshes\n{:>7} Mesh Instances\n{:>7} Materials\n{:>7} Material Instances",
meshes.len(),
has_mesh.iter().len(),
materials.len(),
has_std_mat.iter().len(),
);
*bench_started = None;
*bench_frame = 0;
**camera_transform = positions[0];
}
*bench_frame += 1;
low_high.bench_step();
}
pub fn add_no_frustum_culling(
mut commands: Commands,
convert_query: Query<
Entity,
(
Without<NoFrustumCulling>,
With<MeshMaterial3d<StandardMaterial>>,
),
>,
) {
for entity in convert_query.iter() {
commands.entity(entity).insert(NoFrustumCulling);
}
}
#[inline(always)]
pub fn uhash(a: u32, b: u32) -> u32 {
let mut x = (a.overflowing_mul(1597334673).0) ^ (b.overflowing_mul(3812015801).0);
// from https://nullprogram.com/blog/2018/07/31/
x = x ^ (x >> 16);
x = x.overflowing_mul(0x7feb352d).0;
x = x ^ (x >> 15);
x = x.overflowing_mul(0x846ca68b).0;
x = x ^ (x >> 16);
x
}
#[inline(always)]
pub fn unormf(n: u32) -> f32 {
n as f32 * (1.0 / 0xffffffffu32 as f32)
}
#[inline(always)]
pub fn hash_noise(x: u32, y: u32, z: u32) -> f32 {
let urnd = uhash(x, (y << 11) + z);
unormf(urnd)
}
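// The helpers above form a deterministic 3D hash noise: `uhash` is an integer
// finalizer (the mixer linked in the comment) and `unormf` rescales to [0, 1].
// A standalone property check, with the functions reproduced so it runs on its
// own (`wrapping_mul` is equivalent to `overflowing_mul().0`):

```rust
// Deterministic hash noise: any (x, y, z) cell maps to the same value in [0, 1].
fn uhash(a: u32, b: u32) -> u32 {
    let mut x = a.wrapping_mul(1597334673) ^ b.wrapping_mul(3812015801);
    x ^= x >> 16;
    x = x.wrapping_mul(0x7feb352d);
    x ^= x >> 15;
    x = x.wrapping_mul(0x846ca68b);
    x ^ (x >> 16)
}

fn unormf(n: u32) -> f32 {
    n as f32 * (1.0 / 0xffffffffu32 as f32)
}

fn hash_noise(x: u32, y: u32, z: u32) -> f32 {
    unormf(uhash(x, (y << 11) + z))
}

fn main() {
    // Deterministic: the same cell always yields the same value.
    assert_eq!(hash_noise(7, 3, 1), hash_noise(7, 3, 1));
    // Always normalized to [0, 1].
    for i in 0..1000 {
        let v = hash_noise(i, 0, 0);
        assert!((0.0..=1.0).contains(&v));
    }
}
```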
// BC7 block is 16 bytes, BC4 block is 8 bytes
fn calculate_bcn_image_size_with_mips(size: u32, block_size: u32) -> (u32, u32) {
let mut total_size = 0;
let mut mip_size = size;
let mut mip_count = 0;
while mip_size > 4 {
mip_count += 1;
        let num_blocks = mip_size.div_ceil(4); // round up to whole 4x4 blocks
let mip_level_size = num_blocks * num_blocks * block_size;
total_size += mip_level_size;
mip_size = (mip_size / 2).max(1);
}
(total_size, mip_count.max(1))
}
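// As the comment above notes, BCn formats store one fixed-size block per 4x4
// pixel tile, so the per-mip arithmetic can be sanity-checked with a short
// standalone sketch (the helper name here is illustrative):

```rust
// A WxH BCn mip level occupies ceil(W/4) * ceil(H/4) blocks, where each
// block is 16 bytes for BC7 and 8 bytes for BC4.
fn bcn_mip_level_bytes(width: u32, height: u32, block_bytes: u32) -> u32 {
    width.div_ceil(4) * height.div_ceil(4) * block_bytes
}

fn main() {
    // A 2048x2048 BC7 mip: 512*512 blocks * 16 bytes = 4 MiB.
    assert_eq!(bcn_mip_level_bytes(2048, 2048, 16), 512 * 512 * 16);
    // BC4 halves that: 512*512 blocks * 8 bytes = 2 MiB.
    assert_eq!(bcn_mip_level_bytes(2048, 2048, 8), 512 * 512 * 8);
    // Non-multiples of 4 still need whole blocks: a 6x6 BC7 mip uses 2x2 blocks.
    assert_eq!(bcn_mip_level_bytes(6, 6, 16), 2 * 2 * 16);
}
```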
#[derive(Resource, Default)]
struct FrameLowHigh {
one_percent_low: f64,
one_percent_high: f64,
sum_one_percent_low: f64,
sum_one_percent_high: f64,
}
impl FrameLowHigh {
fn bench_reset(&mut self) {
self.sum_one_percent_high = 0.0;
self.sum_one_percent_low = 0.0;
}
fn bench_step(&mut self) {
self.sum_one_percent_high += self.one_percent_high;
self.sum_one_percent_low += self.one_percent_low;
}
}
fn frame_time_system(
diagnostics: Res<DiagnosticsStore>,
mut text: Query<&mut Text, With<FrameTimeText>>,
mut measurements: Local<Vec<f64>>,
mut low_high: ResMut<FrameLowHigh>,
) {
if let Some(frame_time) = diagnostics.get(&FrameTimeDiagnosticsPlugin::FRAME_TIME) {
let mut string = format!(
"\n{:>7.2}ms ema\n{:>7.2}ms sma\n",
frame_time.smoothed().unwrap_or_default(),
frame_time.average().unwrap_or_default()
);
if frame_time.history_len() >= 100 {
measurements.clear();
measurements.extend(frame_time.measurements().map(|t| t.value));
measurements.sort_by(|a, b| a.partial_cmp(b).unwrap());
let count = measurements.len() / 100;
low_high.one_percent_low = measurements.iter().take(count).sum::<f64>() / count as f64;
low_high.one_percent_high =
measurements.iter().rev().take(count).sum::<f64>() / count as f64;
string.push_str(&format!(
"{:>7.2}ms 1% low\n{:>7.2}ms 1% high\n",
low_high.one_percent_low, low_high.one_percent_high
));
}
for mut t in &mut text {
t.0 = string.clone();
}
};
}
@@ -0,0 +1,40 @@
[package]
name = "mipmap_generator"
version = "0.1.0"
edition = "2024"
publish = false
license = "MIT OR Apache-2.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow = "1"
bevy = { path = "../../../", default-features = false, features = [
"bevy_asset",
"bevy_scene",
"bevy_pbr",
"ktx2",
"jpeg",
"multi_threaded",
] }
image = { version = "0.25.2", default-features = false }
fast_image_resize = { version = "5.4", features = ["image"] }
futures-lite = "2.0.1"
tracing = "0.1"
intel_tex_2 = { version = "0.4.0", optional = true }
zstd = { version = "0.13.2", optional = true }
[dev-dependencies]
bevy = { path = "../../../" }
argh = "0.1.12"
[features]
default = ["debug_text", "bevy_zstd_rust"]
compress = ["dep:intel_tex_2", "dep:zstd"]
debug_text = ["bevy/bevy_ui"]
pbr_transmission_textures = ["bevy/pbr_transmission_textures"]
pbr_multi_layer_material_textures = ["bevy/pbr_multi_layer_material_textures"]
pbr_anisotropy_texture = ["bevy/pbr_anisotropy_texture"]
pbr_specular_textures = ["bevy/pbr_specular_textures"]
bevy_zstd_rust = ["bevy/zstd_rust"]
bevy_zstd_c = ["bevy/zstd_c"]
@@ -0,0 +1,96 @@
# mipmap_generator
Optionally use the `compress` feature and corresponding setting in `MipmapGeneratorSettings` to enable BCn compression. Note: Compression can take a long time depending on the quantity and resolution of the images.
Currently supported conversions:
- R8Unorm -> Bc4RUnorm
- Rg8Unorm -> Bc5RgUnorm
- Rgba8Unorm -> Bc7RgbaUnorm
- Rgba8UnormSrgb -> Bc7RgbaUnormSrgb
Optionally set `compressed_image_data_cache_path` in `MipmapGeneratorSettings` to cache raw compressed image data on disk. Only textures that are BCn compressed will be stored.
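The on-disk cache described above can be sketched as hash-keyed files. This is a minimal standalone illustration under assumed conventions, not the crate's actual file layout or API (all names here are hypothetical):

```rust
use std::fs;
use std::hash::{DefaultHasher, Hash, Hasher};
use std::path::{Path, PathBuf};

// Hypothetical sketch of hash-keyed disk caching: key the compressed payload
// by a hash of its inputs, write it once, and read it back on later runs.
fn cache_file(dir: &Path, key: u64) -> PathBuf {
    dir.join(format!("{key:016x}.bin"))
}

fn store(dir: &Path, key: u64, data: &[u8]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    fs::write(cache_file(dir, key), data)
}

fn load(dir: &Path, key: u64) -> Option<Vec<u8>> {
    fs::read(cache_file(dir, key)).ok()
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("compressed_texture_cache_demo");
    // Key by whatever uniquely identifies the compressed output, e.g.
    // source dimensions and target format.
    let mut h = DefaultHasher::new();
    (2048u32, "Bc7RgbaUnormSrgb").hash(&mut h);
    let key = h.finish();
    store(&dir, key, b"blocks")?;
    assert_eq!(load(&dir, key).as_deref(), Some(&b"blocks"[..]));
    fs::remove_dir_all(&dir)?;
    Ok(())
}
```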
Test loading a glTF, computing mips with texture compression, and caching compressed image data on disk:
`cargo run -p mipmap_generator --features compress --release --example load_gltf -- --compress --cache`
## Note
Bevy supports a [variety of compressed image formats](https://docs.rs/bevy/latest/bevy/render/texture/enum.ImageFormat.html) that can also contain mipmaps. This plugin is intended for situations where the use of those formats is impractical (mostly prototyping/testing). With this plugin, mipmap generation happens slowly on the CPU.
Instead of using this plugin, consider using the [CompressedImageSaver](https://bevyengine.org/news/bevy-0-12/#compressedimagesaver).
For generating compressed textures ahead of time also check out:
- [klafsa](https://github.com/superdump/klafsa)
- [kram](https://github.com/alecazam/kram)
- [toktx](https://github.khronos.org/KTX-Software/ktxtools/toktx.html)
- [compressonator](https://gpuopen.com/compressonator/)
- [basis_universal](https://github.com/BinomialLLC/basis_universal)
In my experience, many of these compressed formats can be used with Bevy in glTF files. This can be done by converting and replacing the images included in the glTF and then setting the `mimeType` accordingly, e.g. `"mimeType": "image/ktx2"` for KTX2.
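For example, a glTF image entry repointed at a KTX2 file might look like this (illustrative snippet, not taken from this repo):

```json
{
  "images": [
    {
      "uri": "textures/base_color.ktx2",
      "mimeType": "image/ktx2"
    }
  ]
}
```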
## Usage
```rust
.add_plugins(DefaultPlugins)
// Add MipmapGeneratorPlugin after default plugins
.add_plugins(MipmapGeneratorPlugin)
// Add material types to be converted
.add_systems(Update, generate_mipmaps::<StandardMaterial>)
```
When materials are created, mipmaps will be created for the images used in the material.
Mipmaps will not be generated for materials found on entities that also have the `NoMipmapGeneration` component.
## Custom Materials
For use with custom materials, implement the `GetImages` trait for the custom material.
```rust
pub trait GetImages {
fn get_images(&self) -> Vec<&Handle<Image>>;
}
impl<T: GetImages + MaterialExtension> GetImages for ExtendedMaterial<StandardMaterial, T> {
fn get_images(&self) -> Vec<&Handle<Image>> {
let mut images: Vec<&Handle<Image>> = vec![
&self.base.base_color_texture,
&self.base.emissive_texture,
&self.base.metallic_roughness_texture,
&self.base.normal_map_texture,
&self.base.occlusion_texture,
&self.base.depth_map,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.diffuse_transmission_texture,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.specular_transmission_texture,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.thickness_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_roughness_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_normal_texture,
#[cfg(feature = "pbr_anisotropy_texture")]
&self.base.anisotropy_texture,
#[cfg(feature = "pbr_specular_textures")]
&self.base.specular_texture,
#[cfg(feature = "pbr_specular_textures")]
&self.base.specular_tint_texture,
]
.into_iter()
.flatten()
.collect();
images.append(&mut self.extension.get_images());
images
}
}
```
## TODO
- Support more texture formats.
- Support re-running if images are updated.
@@ -0,0 +1,77 @@
//! Loads and renders a glTF file as a scene with run-time generated mipmaps and optional texture compression.
use std::path::PathBuf;
use argh::FromArgs;
use bevy::{asset::UnapprovedPathMode, prelude::*};
use mipmap_generator::{
generate_mipmaps, MipmapGeneratorDebugTextPlugin, MipmapGeneratorPlugin,
MipmapGeneratorSettings,
};
#[derive(FromArgs, Resource, Clone)]
/// Config
pub struct Args {
/// compress textures (requires compress feature)
#[argh(switch)]
compress: bool,
/// if set, raw compressed image data will be cached in the `compressed_texture_cache` directory. Images that are not BCn compressed are not cached.
#[argh(switch)]
cache: bool,
/// if low_quality is set, only 0.5 byte/px formats will be used (BC1, BC4) unless the alpha channel is in use, then BC3 will be used. When low quality is set, compression is generally faster than CompressionSpeed::UltraFast and CompressionSpeed is ignored.
#[argh(switch)]
low_quality: bool,
}
fn main() {
let args: Args = argh::from_env();
App::new()
.insert_resource(ClearColor(Color::srgb(0.1, 0.1, 0.1)))
.add_plugins(DefaultPlugins.set(AssetPlugin {
// Needed to load from the parent bevy assets folder
unapproved_path_mode: UnapprovedPathMode::Allow,
..default()
}))
.insert_resource(MipmapGeneratorSettings {
// Manually setting anisotropic filtering to 16x
anisotropic_filtering: 16,
compression: args.compress.then(Default::default),
compressed_image_data_cache_path: if args.cache {
Some(PathBuf::from("compressed_texture_cache"))
} else {
None
},
low_quality: args.low_quality,
..default()
})
.add_systems(Startup, setup)
// Add MipmapGeneratorPlugin after default plugins
.add_plugins((MipmapGeneratorPlugin, MipmapGeneratorDebugTextPlugin))
// Add material types to be converted
.add_systems(Update, generate_mipmaps::<StandardMaterial>)
.run();
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn((
Camera3d::default(),
Transform::from_xyz(1.0, 0.2, 1.0).looking_at(Vec3::new(0.0, 0.3, 0.0), Vec3::Y),
));
commands.spawn((
PointLight {
shadows_enabled: true,
..default()
},
Transform::from_xyz(-1.0, 2.0, -3.0),
));
commands.spawn(SceneRoot(
asset_server.load(
// Path into the parent bevy assets folder; requires the UnapprovedPathMode::Allow set above.
GltfAssetLabel::Scene(0)
.from_asset("../../../../assets/models/FlightHelmet/FlightHelmet.gltf"),
),
));
}
@@ -0,0 +1,142 @@
use std::{f32::consts::PI, path::PathBuf};
use argh::FromArgs;
use bevy::{
prelude::*,
render::render_resource::{
Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
},
};
use mipmap_generator::{
generate_mipmaps, MipmapGeneratorDebugTextPlugin, MipmapGeneratorPlugin,
MipmapGeneratorSettings,
};
#[derive(FromArgs, Resource, Clone)]
/// Config
pub struct Args {
/// if set, raw compressed image data will be cached in the `compressed_texture_cache` directory. Images that are not BCn compressed are not cached.
#[argh(switch)]
cache: bool,
/// if low_quality is set, only 0.5 byte/px formats will be used (BC1, BC4) unless the alpha channel is in use, then BC3 will be used. When low quality is set, compression is generally faster than CompressionSpeed::UltraFast and CompressionSpeed is ignored.
#[argh(switch)]
low_quality: bool,
}
fn main() {
let args: Args = argh::from_env();
let mut app = App::new();
app.add_plugins(DefaultPlugins)
.insert_resource(MipmapGeneratorSettings {
compression: Some(Default::default()),
compressed_image_data_cache_path: if args.cache {
Some(PathBuf::from("compressed_texture_cache"))
} else {
None
},
low_quality: args.low_quality,
..default()
})
.add_systems(Startup, setup)
// Add MipmapGeneratorPlugin after default plugins
.add_plugins((MipmapGeneratorPlugin, MipmapGeneratorDebugTextPlugin))
// Add material types to be converted
.add_systems(Update, generate_mipmaps::<StandardMaterial>);
app.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
mut images: ResMut<Assets<Image>>,
) {
let image_r = create_test_image(2048, -0.8, 0.156, 1);
let mut mat_r = StandardMaterial::from(images.add(image_r));
mat_r.unlit = true;
let image_rg = create_test_image(2048, -0.8, 0.156, 2);
let mut mat_rg = StandardMaterial::from(images.add(image_rg));
mat_rg.unlit = true;
let image_rgba = create_test_image(2048, -0.8, 0.156, 4);
let mut mat_rgba = StandardMaterial::from(images.add(image_rgba));
mat_rgba.unlit = true;
let plane_h = meshes.add(Plane3d::default().mesh().size(20.0, 30.0));
// planes
commands.spawn((
Mesh3d(plane_h.clone()),
MeshMaterial3d(materials.add(mat_r)),
Transform::from_xyz(-3.0, 0.0, 0.0).with_rotation(Quat::from_rotation_z(-PI * 0.5)),
));
commands.spawn((
Mesh3d(plane_h.clone()),
MeshMaterial3d(materials.add(mat_rg)),
Transform::from_xyz(3.0, 0.0, 0.0).with_rotation(Quat::from_rotation_z(PI * 0.5)),
));
commands.spawn((
Mesh3d(plane_h.clone()),
MeshMaterial3d(materials.add(mat_rgba)),
Transform::from_xyz(0.0, -3.0, 0.0),
));
// camera
commands.spawn((
Camera3d::default(),
Transform::from_xyz(0.0, 0.0, 18.0).looking_at(Vec3::ZERO, Vec3::Y),
));
}
fn create_test_image(size: u32, cx: f32, cy: f32, channels: u32) -> Image {
let data: Vec<u8> = (0..size * size)
.flat_map(|id| {
let mut x = 4.0 * (id % size) as f32 / (size - 1) as f32 - 2.0;
let mut y = 2.0 * (id / size) as f32 / (size - 1) as f32 - 1.0;
let mut count = 0;
while count < 0xFF && x * x + y * y < 4.0 {
let old_x = x;
x = x * x - y * y + cx;
y = 2.0 * old_x * y + cy;
count += 1;
}
let mut values = vec![0xFF - (count * 2) as u8];
if channels > 1 {
values.push(0xFF - (count * 5) as u8);
}
if channels > 2 {
values.push(0xFF - (count * 13) as u8);
values.push(u8::MAX);
}
values
})
.collect();
Image {
texture_descriptor: TextureDescriptor {
label: None,
size: Extent3d {
width: size,
height: size,
..default()
},
dimension: TextureDimension::D2,
format: if channels == 1 {
TextureFormat::R8Unorm
} else if channels == 2 {
TextureFormat::Rg8Unorm
} else {
TextureFormat::Rgba8UnormSrgb
},
mip_level_count: 1,
sample_count: 1,
usage: TextureUsages::TEXTURE_BINDING | TextureUsages::COPY_DST,
view_formats: &[],
},
data: Some(data),
..Default::default()
}
}
@@ -0,0 +1,90 @@
use bevy::{
prelude::*,
render::render_resource::{
Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
},
};
use mipmap_generator::{generate_mipmaps, MipmapGeneratorDebugTextPlugin, MipmapGeneratorPlugin};
fn main() {
let mut app = App::new();
app.add_plugins(DefaultPlugins)
.add_systems(Startup, setup)
// Add MipmapGeneratorPlugin after default plugins
.add_plugins((MipmapGeneratorPlugin, MipmapGeneratorDebugTextPlugin))
// Add material types to be converted
.add_systems(Update, generate_mipmaps::<StandardMaterial>);
app.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
mut images: ResMut<Assets<Image>>,
) {
let image = create_test_image(4096, -0.8, 0.156);
// plane
commands.spawn((
Mesh3d(meshes.add(Plane3d::default().mesh().size(20.0, 20.0))),
MeshMaterial3d(materials.add(StandardMaterial::from(images.add(image)))),
));
// light
commands.spawn((
PointLight {
intensity: 1500.0 * 1000.0,
shadows_enabled: false,
..default()
},
Transform::from_xyz(4.0, 8.0, 4.0),
));
// camera
commands.spawn((
Camera3d::default(),
Transform::from_xyz(0.0, 0.5, 10.0).looking_at(Vec3::ZERO, Vec3::Y),
));
}
fn create_test_image(size: u32, cx: f32, cy: f32) -> Image {
use std::iter;
let data = (0..size * size)
.flat_map(|id| {
// get high five for recognizing this ;)
let mut x = 4.0 * (id % size) as f32 / (size - 1) as f32 - 2.0;
let mut y = 2.0 * (id / size) as f32 / (size - 1) as f32 - 1.0;
let mut count = 0;
while count < 0xFF && x * x + y * y < 4.0 {
let old_x = x;
x = x * x - y * y + cx;
y = 2.0 * old_x * y + cy;
count += 1;
}
iter::once(0xFF - (count * 2) as u8)
.chain(iter::once(0xFF - (count * 5) as u8))
.chain(iter::once(0xFF - (count * 13) as u8))
.chain(iter::once(u8::MAX))
})
.collect();
Image {
texture_descriptor: TextureDescriptor {
label: None,
size: Extent3d {
width: size,
height: size,
..default()
},
dimension: TextureDimension::D2,
format: TextureFormat::Rgba8UnormSrgb,
mip_level_count: 1,
sample_count: 1,
usage: TextureUsages::TEXTURE_BINDING | TextureUsages::COPY_DST,
view_formats: &[],
},
data: Some(data),
..Default::default()
}
}
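// The per-pixel loop in create_test_image above is the classic escape-time
// iteration for the Julia set z <- z^2 + c with c = -0.8 + 0.156i. Extracted
// as a standalone sketch (the helper name is illustrative):

```rust
// Julia-set escape-time: iterate z <- z^2 + c and count steps until |z| > 2
// (squared magnitude > 4) or the cap of 255 is reached, matching the
// texture generator above.
fn julia_escape_count(mut x: f32, mut y: f32, cx: f32, cy: f32) -> u32 {
    let mut count = 0;
    while count < 0xFF && x * x + y * y < 4.0 {
        let old_x = x;
        x = x * x - y * y + cx; // real part of z^2 + c
        y = 2.0 * old_x * y + cy; // imaginary part of z^2 + c
        count += 1;
    }
    count
}

fn main() {
    // Points outside the radius-2 disk escape immediately.
    assert_eq!(julia_escape_count(2.5, 0.0, -0.8, 0.156), 0);
    // Counts are always capped at 255.
    assert!(julia_escape_count(0.0, 0.0, -0.8, 0.156) <= 0xFF);
}
```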
@@ -0,0 +1,954 @@
#[cfg(feature = "compress")]
use std::{
fs::{self, File},
hash::{DefaultHasher, Hash, Hasher},
io::{Read, Write},
path::Path,
};
use anyhow::anyhow;
use fast_image_resize::{ResizeAlg, ResizeOptions, Resizer};
use tracing::warn;
use bevy::{
asset::RenderAssetUsages,
image::{ImageSampler, ImageSamplerDescriptor},
pbr::{ExtendedMaterial, MaterialExtension},
platform::collections::HashMap,
prelude::*,
render::render_resource::{Extent3d, TextureDataOrder, TextureDimension, TextureFormat},
tasks::{AsyncComputeTaskPool, Task},
};
use futures_lite::future;
use image::{imageops::FilterType, DynamicImage, ImageBuffer};
#[derive(Resource, Deref)]
pub struct DefaultSampler(ImageSamplerDescriptor);
#[derive(Resource, Clone)]
pub struct MipmapGeneratorSettings {
/// Valid values: 1, 2, 4, 8, and 16.
pub anisotropic_filtering: u16,
pub filter_type: FilterType,
pub minimum_mip_resolution: u32,
/// Set to Some(CompressionSpeed) to enable compression.
/// The compress feature also needs to be enabled. Only BCn currently supported.
/// Compression can take a long time, CompressionSpeed::UltraFast (default) is recommended.
/// Currently supported conversions:
/// - R8Unorm -> Bc4RUnorm
/// - Rg8Unorm -> Bc5RgUnorm
/// - Rgba8Unorm -> Bc7RgbaUnorm
/// - Rgba8UnormSrgb -> Bc7RgbaUnormSrgb
pub compression: Option<CompressionSpeed>,
/// If set, raw compressed image data will be cached in this directory.
/// Images that are not BCn compressed are not cached.
pub compressed_image_data_cache_path: Option<std::path::PathBuf>,
/// If low_quality is set, only 0.5 byte/px formats will be used (BC1, BC4) unless the alpha channel is in use, then BC3 will be used.
/// When low quality is set, compression is generally faster than CompressionSpeed::UltraFast and CompressionSpeed is ignored.
// TODO: low_quality normals should probably use BC5 or BC7, as they look quite bad at BC1
pub low_quality: bool,
}
impl Default for MipmapGeneratorSettings {
fn default() -> Self {
Self {
// Default to 8x anisotropic filtering
anisotropic_filtering: 8,
filter_type: FilterType::Triangle,
minimum_mip_resolution: 1,
compression: None,
compressed_image_data_cache_path: None,
low_quality: false,
}
}
}
#[derive(Default, Clone, Copy, Hash)]
pub enum CompressionSpeed {
#[default]
UltraFast,
VeryFast,
Fast,
Medium,
Slow,
}
impl CompressionSpeed {
#[cfg(feature = "compress")]
fn get_bc7_encoder(&self, has_alpha: bool) -> intel_tex_2::bc7::EncodeSettings {
if has_alpha {
match self {
CompressionSpeed::UltraFast => intel_tex_2::bc7::alpha_ultra_fast_settings(),
CompressionSpeed::VeryFast => intel_tex_2::bc7::alpha_very_fast_settings(),
CompressionSpeed::Fast => intel_tex_2::bc7::alpha_fast_settings(),
CompressionSpeed::Medium => intel_tex_2::bc7::alpha_basic_settings(),
CompressionSpeed::Slow => intel_tex_2::bc7::alpha_slow_settings(),
}
} else {
match self {
CompressionSpeed::UltraFast => intel_tex_2::bc7::opaque_ultra_fast_settings(),
CompressionSpeed::VeryFast => intel_tex_2::bc7::opaque_very_fast_settings(),
CompressionSpeed::Fast => intel_tex_2::bc7::opaque_fast_settings(),
CompressionSpeed::Medium => intel_tex_2::bc7::opaque_basic_settings(),
CompressionSpeed::Slow => intel_tex_2::bc7::opaque_slow_settings(),
}
}
}
}
/// Mipmaps will not be generated for materials found on entities that also have the `NoMipmapGeneration` component.
#[derive(Component)]
pub struct NoMipmapGeneration;
#[derive(Resource, Default)]
pub struct MipmapGenerationProgress {
pub processed: u32,
pub total: u32,
/// Tracks the number of bytes that have been cached since startup.
/// Used to warn at 1GB increments to avoid continuously caching images that change every frame.
pub cached_data_size_bytes: usize,
}
fn format_bytes_size(size_in_bytes: usize) -> String {
if size_in_bytes < 1_000 {
format!("{}B", size_in_bytes)
} else if size_in_bytes < 1_000_000 {
format!("{:.2}KB", size_in_bytes as f64 / 1e3)
} else if size_in_bytes < 1_000_000_000 {
format!("{:.2}MB", size_in_bytes as f64 / 1e6)
} else {
format!("{:.2}GB", size_in_bytes as f64 / 1e9)
}
}
pub struct MipmapGeneratorPlugin;
impl Plugin for MipmapGeneratorPlugin {
fn build(&self, app: &mut App) {
if let Some(image_plugin) = app
.init_resource::<MipmapGenerationProgress>()
.get_added_plugins::<ImagePlugin>()
.first()
{
let default_sampler = image_plugin.default_sampler.clone();
app.insert_resource(DefaultSampler(default_sampler))
.init_resource::<MipmapGeneratorSettings>();
} else {
warn!("No ImagePlugin found. Try adding MipmapGeneratorPlugin after DefaultPlugins");
}
}
}
#[derive(Clone, Resource)]
#[cfg(feature = "debug_text")]
pub struct MipmapGeneratorDebugTextPlugin;
#[cfg(feature = "debug_text")]
impl Plugin for MipmapGeneratorDebugTextPlugin {
fn build(&self, app: &mut App) {
app.insert_resource(self.clone())
.add_systems(Startup, init_loading_text)
.add_systems(Update, update_loading_text);
}
}
#[cfg(feature = "debug_text")]
fn init_loading_text(mut commands: Commands) {
commands
.spawn((
Node {
left: Val::Px(1.5),
top: Val::Px(1.5),
..default()
},
GlobalZIndex(-1),
))
.with_children(|parent| {
parent.spawn((
Text::new(""),
TextFont {
font_size: 18.0,
..default()
},
TextColor(Color::BLACK),
MipmapGeneratorDebugLoadingText,
));
});
commands.spawn(Node::default()).with_children(|parent| {
parent.spawn((
Text::new(""),
TextFont {
font_size: 18.0,
..default()
},
TextColor(Color::WHITE),
MipmapGeneratorDebugLoadingText,
));
});
}
#[cfg(feature = "debug_text")]
#[derive(Component)]
pub struct MipmapGeneratorDebugLoadingText;
#[cfg(feature = "debug_text")]
fn update_loading_text(
mut texts: Query<(&mut Text, &mut TextColor), With<MipmapGeneratorDebugLoadingText>>,
progress: Res<MipmapGenerationProgress>,
time: Res<Time>,
) {
for (mut text, mut color) in &mut texts {
text.0 = format!(
"bevy_mod_mipmap_generator progress: {} / {}\n{}",
progress.processed,
progress.total,
if progress.cached_data_size_bytes > 0 {
format!(
"Cached this run: {}",
format_bytes_size(progress.cached_data_size_bytes)
)
} else {
String::new()
}
);
let alpha = if progress.processed == progress.total {
(color.0.alpha() - time.delta_secs() * 0.25).max(0.0)
} else {
1.0
};
color.0.set_alpha(alpha);
}
}
pub struct TaskData {
added_cache_size: usize,
image: Image,
}
#[derive(Resource, Default, Deref, DerefMut)]
#[allow(clippy::type_complexity)]
pub struct MipmapTasks<M: Material + GetImages>(
HashMap<Handle<Image>, (Task<TaskData>, Vec<AssetId<M>>)>,
);
#[derive(Component, Clone, Debug, Deref, DerefMut, Reflect, PartialEq, Eq)]
pub struct MaterialHandle<M: Material + GetImages>(pub Handle<M>);
#[allow(clippy::too_many_arguments)]
pub fn generate_mipmaps<M: Material + GetImages>(
mut commands: Commands,
mut material_events: MessageReader<AssetEvent<M>>,
mut materials: ResMut<Assets<M>>,
no_mipmap: Query<&MaterialHandle<M>, With<NoMipmapGeneration>>,
mut images: ResMut<Assets<Image>>,
default_sampler: Res<DefaultSampler>,
mut progress: ResMut<MipmapGenerationProgress>,
settings: Res<MipmapGeneratorSettings>,
mut tasks_res: Option<ResMut<MipmapTasks<M>>>,
) {
let mut new_tasks = MipmapTasks(HashMap::new());
let tasks = if let Some(ref mut tasks) = tasks_res {
tasks
} else {
&mut new_tasks
};
let thread_pool = AsyncComputeTaskPool::get();
'outer: for event in material_events.read() {
let material_h = match event {
AssetEvent::Added { id } => id,
AssetEvent::LoadedWithDependencies { id } => id,
_ => continue,
};
for m in no_mipmap.iter() {
if m.id() == *material_h {
continue 'outer;
}
}
// Use get_mut(material_h) so the sampler filtering change is picked up right away,
// and applied even if mipmaps end up not being generated
if let Some(material) = materials.get_mut(*material_h) {
for image_h in material.get_images().into_iter() {
if let Some((_, material_handles)) = tasks.get_mut(image_h) {
material_handles.push(*material_h);
continue; // There is already a task for this image
}
if let Some(image) = images.get_mut(image_h) {
let mut descriptor = match image.sampler.clone() {
ImageSampler::Default => default_sampler.0.clone(),
ImageSampler::Descriptor(descriptor) => descriptor,
};
descriptor.anisotropy_clamp = settings.anisotropic_filtering;
image.sampler = ImageSampler::Descriptor(descriptor);
if image.texture_descriptor.mip_level_count == 1
&& check_image_compatible(image).is_ok()
{
let mut image = image.clone();
let settings = settings.clone();
let mut added_cache_size = 0;
let task = thread_pool.spawn(async move {
match generate_mips_texture(
&mut image,
&settings.clone(),
&mut added_cache_size,
) {
Ok(_) => (),
Err(e) => warn!("{}", e),
}
TaskData {
added_cache_size,
image,
}
});
tasks.insert(image_h.clone(), (task, vec![*material_h]));
progress.total += 1;
}
}
}
}
}
fn bytes_to_gb(bytes: usize) -> usize {
bytes / 1024_usize.pow(3)
}
tasks.retain(|image_h, (task, material_handles)| {
match future::block_on(future::poll_once(task)) {
Some(task_data) => {
if let Some(image) = images.get_mut(image_h) {
*image = task_data.image;
progress.processed += 1;
let prev_cached_data_gb = bytes_to_gb(progress.cached_data_size_bytes);
progress.cached_data_size_bytes += task_data.added_cache_size;
let current_cached_data_gb = bytes_to_gb(progress.cached_data_size_bytes);
if current_cached_data_gb > prev_cached_data_gb {
warn!(
"Generated cached texture data from just this run is {}",
format_bytes_size(progress.cached_data_size_bytes)
);
}
// Touch material to trigger change detection
for material_h in material_handles.iter() {
let _ = materials.get_mut(*material_h);
}
}
false
}
None => true,
}
});
if tasks_res.is_none() {
commands.insert_resource(new_tasks);
}
}
/// `added_cache_size` is for tracking the amount of data that was cached by this call.
/// Compressed BCn data is cached on disk if `compressed_image_data_cache_path` is set.
pub fn generate_mips_texture(
image: &mut Image,
settings: &MipmapGeneratorSettings,
#[allow(unused)] added_cache_size: &mut usize,
) -> anyhow::Result<()> {
check_image_compatible(image)?;
match try_into_dynamic(image.clone()) {
Ok(mut dyn_image) => {
#[allow(unused_mut)]
let mut has_alpha = false;
#[cfg(feature = "compress")]
if let Some(img) = dyn_image.as_rgba8() {
for px in img.pixels() {
if px.0[3] != 255 {
has_alpha = true;
break;
}
}
}
#[cfg(feature = "compress")]
let mut compressed_format = None;
#[allow(unused_mut)]
let mut compression_speed = settings.compression;
#[cfg(feature = "compress")]
{
if let Some(encoder_setting) = settings.compression {
compressed_format = bcn_equivalent_format_of_dyn_image(
&dyn_image,
image.texture_descriptor.format.is_srgb(),
settings.low_quality,
has_alpha,
)
.ok();
compression_speed = compressed_format.map(|_| encoder_setting);
}
}
#[cfg(feature = "compress")]
let mut input_hash = u64::MAX;
#[allow(unused_mut)]
let mut loaded_from_cache = false;
let mut new_image_data = Vec::new();
#[cfg(feature = "compress")]
if compression_speed.is_some()
&& compressed_format.is_some()
&& let Some(cache_path) = &settings.compressed_image_data_cache_path
{
input_hash = calculate_hash(image, settings);
if let Some(compressed_image_data) = load_from_cache(input_hash, cache_path) {
new_image_data = compressed_image_data;
loaded_from_cache = true;
}
}
let mip_count = calculate_mip_count(
dyn_image.width(),
dyn_image.height(),
settings.minimum_mip_resolution,
u32::MAX,
compression_speed,
);
if !loaded_from_cache {
new_image_data = generate_mips(&mut dyn_image, has_alpha, mip_count, settings);
#[cfg(feature = "compress")]
if let Some(cache_path) = &settings.compressed_image_data_cache_path
&& compression_speed.is_some()
&& compressed_format.is_some()
{
*added_cache_size += new_image_data.len();
save_to_cache(input_hash, &new_image_data, cache_path).unwrap();
}
}
image.texture_descriptor.mip_level_count = mip_count;
#[cfg(feature = "compress")]
if let Some(format) = compressed_format {
image.texture_descriptor.format = format;
// Remove view formats for compressed textures.
                // TODO: Is this an issue? It's difficult to work around since the field is a `&'static [TextureFormat]`.
image.texture_descriptor.view_formats = &[];
}
image.data = Some(new_image_data);
Ok(())
}
Err(e) => Err(e),
}
}
/// Returns a vec of bytes containing the image data for the base level followed by all generated mips.
/// Use `calculate_mip_count()` to find the value for `mip_count`.
pub fn generate_mips(
dyn_image: &mut DynamicImage,
has_alpha: bool,
mip_count: u32,
settings: &MipmapGeneratorSettings,
) -> Vec<u8> {
let mut width = dyn_image.width();
let mut height = dyn_image.height();
#[allow(unused_mut)]
let mut compressed_image_data = None;
#[cfg(feature = "compress")]
if let Some(compression_settings) = settings.compression {
compressed_image_data = bcn_compress_dyn_image(
compression_settings,
dyn_image,
has_alpha,
settings.low_quality,
)
.ok();
}
#[cfg(not(feature = "compress"))]
    if settings.compression.is_some() {
        warn!("Compression was requested, but the `compress` feature is disabled. Falling back to generating mips without compression.");
    }
    let mut image_data = compressed_image_data.unwrap_or_else(|| dyn_image.as_bytes().to_vec());
#[cfg(feature = "compress")]
let min = if settings.compression.is_some() { 4 } else { 1 };
#[cfg(not(feature = "compress"))]
let min = 1;
let mut resizer = Resizer::new();
let resize_alg = ResizeOptions::new()
.resize_alg(match settings.filter_type {
FilterType::Nearest => ResizeAlg::Nearest,
FilterType::Triangle => ResizeAlg::Convolution(fast_image_resize::FilterType::Bilinear),
FilterType::CatmullRom => {
ResizeAlg::Convolution(fast_image_resize::FilterType::CatmullRom)
}
FilterType::Gaussian => ResizeAlg::Convolution(fast_image_resize::FilterType::Gaussian),
FilterType::Lanczos3 => ResizeAlg::Convolution(fast_image_resize::FilterType::Lanczos3),
})
.use_alpha(has_alpha);
for _ in 0..mip_count {
width /= 2;
height /= 2;
// *dyn_image = dyn_image.resize_exact(width, height, settings.filter_type); // Ex: Resizing with Image crate
let mut new = DynamicImage::new(width, height, dyn_image.color());
resizer.resize(dyn_image, &mut new, &resize_alg).unwrap();
*dyn_image = new;
#[allow(unused_mut)]
let mut compressed_image_data = None;
#[cfg(feature = "compress")]
if let Some(compression_speed) = settings.compression {
// https://github.com/bevyengine/bevy/issues/21490
if width >= 4 && height >= 4 {
compressed_image_data = bcn_compress_dyn_image(
compression_speed,
dyn_image,
has_alpha,
settings.low_quality,
)
.ok();
}
}
        image_data.append(&mut compressed_image_data.unwrap_or_else(|| dyn_image.as_bytes().to_vec()));
if width <= min || height <= min {
break;
}
}
image_data
}
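// The buffer built by `generate_mips` is one contiguous chain: base level first,
// then each successively smaller level appended in order. A minimal standalone
// sketch of the byte accounting for an uncompressed RGBA8 chain (4 bytes per
// pixel assumed; `rgba8_chain_size` is a hypothetical helper, not part of this crate):

```rust
/// Total byte size of a tightly packed RGBA8 mip chain, base level first.
fn rgba8_chain_size(mut width: u32, mut height: u32, mip_count: u32) -> usize {
    let mut total = 0usize;
    for _ in 0..mip_count {
        total += (width * height * 4) as usize;
        width = (width / 2).max(1);
        height = (height / 2).max(1);
    }
    total
}

fn main() {
    // A 4x4 RGBA8 image with 3 levels: 16 + 4 + 1 pixels at 4 bytes each.
    assert_eq!(rgba8_chain_size(4, 4, 3), 21 * 4);
}
```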
/// Returns the total number of mip levels, including the base level.
/// `max_mip_count` also counts the base level, so setting it to 2 results in
/// a single additional mip level being generated, for a total of 2 levels.
pub fn calculate_mip_count(
mut width: u32,
mut height: u32,
minimum_mip_resolution: u32,
max_mip_count: u32,
#[allow(unused)] compression: Option<CompressionSpeed>,
) -> u32 {
let mut mip_level_count = 1;
#[cfg(feature = "compress")]
let min = if compression.is_some() { 4 } else { 1 };
#[cfg(not(feature = "compress"))]
let min = 1;
// Use log to avoid loop? Are there edge cases with rounding?
while width / 2 >= minimum_mip_resolution.max(min)
&& height / 2 >= minimum_mip_resolution.max(min)
&& mip_level_count < max_mip_count
{
width /= 2;
height /= 2;
mip_level_count += 1;
}
mip_level_count
}
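// The loop above can be exercised in isolation. A standalone sketch with the
// feature-gated block minimum folded into a plain `min` parameter (4 when BCn
// compression is active, 1 otherwise); `mip_count_sketch` is hypothetical:

```rust
/// Standalone sketch of the mip-count arithmetic: halve until the minimum
/// resolution (clamped to `min`) or `max_mip_count` is reached.
fn mip_count_sketch(
    mut width: u32,
    mut height: u32,
    minimum_mip_resolution: u32,
    max_mip_count: u32,
    min: u32,
) -> u32 {
    let mut mip_level_count = 1;
    while width / 2 >= minimum_mip_resolution.max(min)
        && height / 2 >= minimum_mip_resolution.max(min)
        && mip_level_count < max_mip_count
    {
        width /= 2;
        height /= 2;
        mip_level_count += 1;
    }
    mip_level_count
}

fn main() {
    // 1024x1024 down to the 4x4 BCn minimum: 1024, 512, ..., 4 = 9 levels.
    assert_eq!(mip_count_sketch(1024, 1024, 1, u32::MAX, 4), 9);
    // Uncompressed can go down to 1x1: 11 levels.
    assert_eq!(mip_count_sketch(1024, 1024, 1, u32::MAX, 1), 11);
    // max_mip_count caps the chain.
    assert_eq!(mip_count_sketch(1024, 1024, 1, 2, 1), 2);
}
```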
/// Extract a specific individual mip level as a new image.
/// `mip_level` is 1-based: level 1 is the original image.
pub fn extract_mip_level(image: &Image, mip_level: u32) -> anyhow::Result<Image> {
check_image_compatible(image)?;
let descriptor = &image.texture_descriptor;
if descriptor.mip_level_count < mip_level {
return Err(anyhow!(
"Mip level {mip_level} requested, but only {} are available.",
descriptor.mip_level_count
));
}
let block_size = descriptor.format.block_copy_size(None).unwrap() as usize;
let mut width = descriptor.size.width as usize;
let mut height = descriptor.size.height as usize;
let mut byte_offset = 0usize;
for _ in 0..mip_level - 1 {
byte_offset += width * block_size * height;
width /= 2;
height /= 2;
}
let mut new_descriptor = descriptor.clone();
new_descriptor.mip_level_count = 1;
new_descriptor.size = Extent3d {
width: width as u32,
height: height as u32,
depth_or_array_layers: 1,
};
Ok(Image {
data: image
.data
.as_ref()
.map(|data| data[byte_offset..byte_offset + (width * block_size * height)].to_vec()),
data_order: TextureDataOrder::default(),
texture_descriptor: new_descriptor,
sampler: image.sampler.clone(),
texture_view_descriptor: image.texture_view_descriptor.clone(),
asset_usage: RenderAssetUsages::default(),
copy_on_resize: false,
})
}
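// The offset arithmetic above, extracted into a standalone sketch. `mip_level`
// is 1-based (level 1 is the base image) and `block_size` is bytes per pixel
// for uncompressed formats. `mip_offset` is a hypothetical helper; its
// `saturating_sub` additionally guards the `mip_level == 0` case, which would
// underflow in the loop above.

```rust
/// Byte offset and dimensions of a 1-based mip level inside a packed chain:
/// skip the bytes of all levels before `mip_level`.
fn mip_offset(
    mut width: usize,
    mut height: usize,
    block_size: usize,
    mip_level: u32,
) -> (usize, usize, usize) {
    let mut byte_offset = 0usize;
    for _ in 0..mip_level.saturating_sub(1) {
        byte_offset += width * height * block_size;
        width /= 2;
        height /= 2;
    }
    (byte_offset, width, height)
}

fn main() {
    // Level 1 is the base image: zero offset, original size.
    assert_eq!(mip_offset(8, 8, 4, 1), (0, 8, 8));
    // Level 2 starts right after the 8x8 RGBA8 base: 8 * 8 * 4 = 256 bytes in.
    assert_eq!(mip_offset(8, 8, 4, 2), (256, 4, 4));
}
```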
pub fn check_image_compatible(image: &Image) -> anyhow::Result<()> {
if image.data.is_none() {
return Err(anyhow!(
            "Image is a GPU storage texture, which is not supported."
));
}
if image.is_compressed() {
        return Err(anyhow!("Compressed images are not supported."));
}
let descriptor = &image.texture_descriptor;
if descriptor.dimension != TextureDimension::D2 {
return Err(anyhow!(
"Image has dimension {:?} but only TextureDimension::D2 is supported.",
descriptor.dimension
));
}
if descriptor.size.depth_or_array_layers != 1 {
return Err(anyhow!(
            "Image contains {} layers, but only a single layer is supported.",
descriptor.size.depth_or_array_layers
));
}
Ok(())
}
/// Implement the `GetImages` trait for any material that needs conversion.
pub trait GetImages {
fn get_images(&self) -> Vec<&Handle<Image>>;
}
impl GetImages for StandardMaterial {
fn get_images(&self) -> Vec<&Handle<Image>> {
vec![
&self.base_color_texture,
&self.emissive_texture,
&self.metallic_roughness_texture,
&self.normal_map_texture,
&self.occlusion_texture,
]
.into_iter()
.flatten()
.collect()
}
}
impl<T: GetImages + MaterialExtension> GetImages for ExtendedMaterial<StandardMaterial, T> {
fn get_images(&self) -> Vec<&Handle<Image>> {
let mut images: Vec<&Handle<Image>> = vec![
&self.base.base_color_texture,
&self.base.emissive_texture,
&self.base.metallic_roughness_texture,
&self.base.normal_map_texture,
&self.base.occlusion_texture,
&self.base.depth_map,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.diffuse_transmission_texture,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.specular_transmission_texture,
#[cfg(feature = "pbr_transmission_textures")]
&self.base.thickness_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_roughness_texture,
#[cfg(feature = "pbr_multi_layer_material_textures")]
&self.base.clearcoat_normal_texture,
#[cfg(feature = "pbr_anisotropy_texture")]
&self.base.anisotropy_texture,
#[cfg(feature = "pbr_specular_textures")]
&self.base.specular_texture,
#[cfg(feature = "pbr_specular_textures")]
&self.base.specular_tint_texture,
]
.into_iter()
.flatten()
.collect();
images.append(&mut self.extension.get_images());
images
}
}
pub fn try_into_dynamic(image: Image) -> anyhow::Result<DynamicImage> {
let Some(image_data) = image.data else {
return Err(anyhow!(
"Conversion into dynamic image not supported for GPU storage texture."
));
};
match image.texture_descriptor.format {
TextureFormat::R8Unorm => ImageBuffer::from_raw(
image.texture_descriptor.size.width,
image.texture_descriptor.size.height,
image_data,
)
.map(DynamicImage::ImageLuma8),
        // The `image` crate has no two-channel RGB type, so Rg8 data is carried
        // in a luma+alpha buffer; both channels survive resizing this way.
        TextureFormat::Rg8Unorm => ImageBuffer::from_raw(
image.texture_descriptor.size.width,
image.texture_descriptor.size.height,
image_data,
)
.map(DynamicImage::ImageLumaA8),
TextureFormat::Rgba8UnormSrgb => ImageBuffer::from_raw(
image.texture_descriptor.size.width,
image.texture_descriptor.size.height,
image_data,
)
.map(DynamicImage::ImageRgba8),
TextureFormat::Rgba8Unorm => ImageBuffer::from_raw(
image.texture_descriptor.size.width,
image.texture_descriptor.size.height,
image_data,
)
.map(DynamicImage::ImageRgba8),
        // Return an error if conversion isn't supported.
texture_format => {
return Err(anyhow!(
"Conversion into dynamic image not supported for {:?}.",
texture_format
))
}
}
.ok_or_else(|| {
anyhow!(
"Failed to convert into {:?}.",
image.texture_descriptor.format
)
})
}
#[cfg(feature = "compress")]
fn bcn_compress_dyn_image(
compression_speed: CompressionSpeed,
dyn_image: &DynamicImage,
has_alpha: bool,
low_quality: bool,
) -> anyhow::Result<Vec<u8>> {
use image::Rgba;
let width = dyn_image.width();
let height = dyn_image.height();
let mut image_data;
if low_quality {
match dyn_image {
DynamicImage::ImageLuma8(data) => {
image_data = vec![0u8; intel_tex_2::bc4::calc_output_size(width, height)];
let surface = intel_tex_2::RSurface {
width,
height,
stride: width,
data,
};
intel_tex_2::bc4::compress_blocks_into(&surface, &mut image_data);
}
DynamicImage::ImageLumaA8(data) => {
let mut rgba =
ImageBuffer::<Rgba<u8>, Vec<u8>>::new(dyn_image.width(), dyn_image.height());
for (rgba_px, rg_px) in rgba.pixels_mut().zip(data.pixels()) {
rgba_px.0[0] = rg_px.0[0];
rgba_px.0[1] = rg_px.0[1];
}
image_data = vec![0u8; intel_tex_2::bc1::calc_output_size(width, height)];
let surface = intel_tex_2::RgbaSurface {
width,
height,
stride: width * 4,
data: rgba.as_raw(),
};
intel_tex_2::bc1::compress_blocks_into(&surface, &mut image_data);
}
DynamicImage::ImageRgba8(data) => {
if has_alpha {
image_data = vec![0u8; intel_tex_2::bc3::calc_output_size(width, height)];
let surface = intel_tex_2::RgbaSurface {
width,
height,
stride: width * 4,
data,
};
intel_tex_2::bc3::compress_blocks_into(&surface, &mut image_data);
} else {
image_data = vec![0u8; intel_tex_2::bc1::calc_output_size(width, height)];
let surface = intel_tex_2::RgbaSurface {
width,
height,
stride: width * 4,
data,
};
intel_tex_2::bc1::compress_blocks_into(&surface, &mut image_data);
}
}
            // Return an error if compression isn't supported for this image type.
            dyn_image => {
                return Err(anyhow!(
                    "BCn compression not supported for {:?}.",
                    dyn_image.color()
                ))
}
};
} else {
match dyn_image {
DynamicImage::ImageLuma8(data) => {
image_data = vec![0u8; intel_tex_2::bc4::calc_output_size(width, height)];
let surface = intel_tex_2::RSurface {
width,
height,
stride: width,
data,
};
intel_tex_2::bc4::compress_blocks_into(&surface, &mut image_data);
}
DynamicImage::ImageLumaA8(data) => {
image_data = vec![0u8; intel_tex_2::bc5::calc_output_size(width, height)];
let surface = intel_tex_2::RgSurface {
width,
height,
stride: width * 2,
data,
};
intel_tex_2::bc5::compress_blocks_into(&surface, &mut image_data);
}
DynamicImage::ImageRgba8(data) => {
image_data = vec![0u8; intel_tex_2::bc7::calc_output_size(width, height)];
let surface = intel_tex_2::RgbaSurface {
width,
height,
stride: width * 4,
data,
};
intel_tex_2::bc7::compress_blocks_into(
&compression_speed.get_bc7_encoder(has_alpha),
&surface,
&mut image_data,
);
}
            // Return an error if compression isn't supported for this image type.
            dyn_image => {
                return Err(anyhow!(
                    "BCn compression not supported for {:?}.",
                    dyn_image.color()
                ))
}
};
}
Ok(image_data)
}
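// The `calc_output_size` calls above follow standard BCn sizing: 4x4 texel
// blocks, 8 bytes per block for BC1/BC4 and 16 for BC3/BC5/BC7 (hence the
// 0.5 vs 1 byte/px distinction, and the >= 4 minimum dimension below). A
// sketch of that rule (`bcn_output_size` is hypothetical, not the intel_tex_2 API):

```rust
/// Compressed size of a BCn texture: dimensions round up to whole 4x4 blocks.
fn bcn_output_size(width: u32, height: u32, bytes_per_block: u32) -> usize {
    let blocks_w = width.div_ceil(4);
    let blocks_h = height.div_ceil(4);
    (blocks_w * blocks_h * bytes_per_block) as usize
}

fn main() {
    // A 256x256 BC7 texture: 64x64 blocks at 16 bytes each (1 byte/px).
    assert_eq!(bcn_output_size(256, 256, 16), 64 * 64 * 16);
    // BC1 at the same size is half that (0.5 byte/px).
    assert_eq!(bcn_output_size(256, 256, 8), 64 * 64 * 8);
}
```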
/// If `low_quality` is set, only 0.5 byte/px formats (BC1, BC4) will be used,
/// unless alpha is in use, in which case BC3 (1 byte/px) is used.
pub fn bcn_equivalent_format_of_dyn_image(
dyn_image: &DynamicImage,
is_srgb: bool,
low_quality: bool,
has_alpha: bool,
) -> anyhow::Result<TextureFormat> {
if dyn_image.width() < 4 || dyn_image.height() < 4 {
return Err(anyhow!("Image size too small for BCn compression"));
}
if low_quality {
match dyn_image {
DynamicImage::ImageLuma8(_) => Ok(TextureFormat::Bc4RUnorm),
DynamicImage::ImageLumaA8(_) => Ok(TextureFormat::Bc1RgbaUnorm),
DynamicImage::ImageRgba8(_) => Ok(if has_alpha {
if is_srgb {
TextureFormat::Bc3RgbaUnormSrgb
} else {
TextureFormat::Bc3RgbaUnorm
}
} else if is_srgb {
TextureFormat::Bc1RgbaUnormSrgb
} else {
TextureFormat::Bc1RgbaUnorm
}),
            // Return an error if there is no BCn equivalent for this image type.
            dyn_image => Err(anyhow!(
                "No BCn equivalent format for {:?}.",
                dyn_image.color()
            )),
}
} else {
match dyn_image {
DynamicImage::ImageLuma8(_) => Ok(TextureFormat::Bc4RUnorm),
DynamicImage::ImageLumaA8(_) => Ok(TextureFormat::Bc5RgUnorm),
DynamicImage::ImageRgba8(_) => Ok(if is_srgb {
TextureFormat::Bc7RgbaUnormSrgb
} else {
TextureFormat::Bc7RgbaUnorm
}),
            // Return an error if there is no BCn equivalent for this image type.
            dyn_image => Err(anyhow!(
                "No BCn equivalent format for {:?}.",
                dyn_image.color()
            )),
}
}
}
/// Calculate the cache-key hash from the non-compressed, non-mipmapped image
/// and the settings that affect the generated data.
#[cfg(feature = "compress")]
fn calculate_hash(image: &Image, settings: &MipmapGeneratorSettings) -> u64 {
let mut hasher = DefaultHasher::new();
image.data.hash(&mut hasher);
if settings.low_quality {
(934870234u32).hash(&mut hasher);
}
settings.compression.hash(&mut hasher);
match settings.filter_type {
FilterType::Nearest => (934870234u32).hash(&mut hasher),
FilterType::Triangle => (46345624u32).hash(&mut hasher),
FilterType::CatmullRom => (54676234u32).hash(&mut hasher),
FilterType::Gaussian => (623455643u32).hash(&mut hasher),
FilterType::Lanczos3 => (675856584u32).hash(&mut hasher),
}
image.texture_descriptor.hash(&mut hasher);
hasher.finish()
}
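// `calculate_hash` mixes the raw image bytes with every setting that changes
// the generated bytes, so changing any of them invalidates the cache entry. A
// standalone sketch of that pattern (`cache_key` is hypothetical). Note that
// `DefaultHasher`'s algorithm is not guaranteed stable across Rust releases,
// so cache entries are only portable within one toolchain.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash input bytes together with a settings discriminant.
fn cache_key(data: &[u8], discriminant: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    discriminant.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same input, same key within a run; a different discriminant changes it.
    assert_eq!(cache_key(b"px", 1), cache_key(b"px", 1));
    assert_ne!(cache_key(b"px", 1), cache_key(b"px", 2));
}
```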
/// Save raw image bytes to disk cache
#[cfg(feature = "compress")]
fn save_to_cache(hash: u64, bytes: &[u8], cache_dir: &Path) -> std::io::Result<()> {
    // `create_dir_all` also creates any missing parent directories.
    fs::create_dir_all(cache_dir)?;
let file_path = cache_dir.join(format!("{:x}", hash));
let mut file = File::create(file_path)?;
    file.write_all(&zstd::encode_all(bytes, 0)?)?;
Ok(())
}
/// Load the cached bytes matching the input hash from the disk cache, if any.
#[cfg(feature = "compress")]
fn load_from_cache(hash: u64, cache_dir: &Path) -> Option<Vec<u8>> {
let file_path = cache_dir.join(format!("{:x}", hash));
if !file_path.exists() {
return None;
}
let Ok(mut file) = File::open(file_path) else {
return None;
};
let mut cached_bytes = Vec::new();
if file.read_to_end(&mut cached_bytes).is_err() {
return None;
};
zstd::decode_all(cached_bytes.as_slice()).ok()
}
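// Cache entries are keyed purely by file name: the u64 hash rendered as
// lowercase hex inside the cache directory. A sketch of the naming
// (`cache_file_name` is hypothetical):

```rust
use std::path::Path;

/// Render a cache key the same way the save/load paths above do.
fn cache_file_name(hash: u64) -> String {
    format!("{:x}", hash)
}

fn main() {
    assert_eq!(cache_file_name(255), "ff");
    let path = Path::new("compressed_texture_cache").join(cache_file_name(0xdead_beef));
    assert_eq!(path.file_name().unwrap().to_str().unwrap(), "deadbeef");
}
```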