A list of puns related to "Depth Buffer"
So I finally managed to get ReShade and ENB to work together in Skyrim, but now I get a weird depth buffer effect, as if it's burnt into my monitor or something, shown here: https://imgur.com/a/BZnZYz8
It goes away if I select the "disable INTZ" option or whatever, but then none of the effects that require a depth map work, specifically RTGI, which is literally the only effect I'm trying to use in ReShade. Anybody know how to solve this?
I've seen a lot of tutorials on how to get world coordinates from the depth buffer, but I can't find anything that would help me understand how to do the opposite, other than these:
https://tipsfordev.com/writing-the-correct-value-in-the-depth-buffer-when-using-ray-casting
https://www.iquilezles.org/www/articles/raypolys/raypolys.htm
http://blog.hvidtfeldts.net/index.php/2014/01/combining-ray-tracing-and-polygons/
I still don't get it. I'm trying to render voxels with ray casting and calculate their normals for lighting. I heard I can calculate normals using the depth buffer. The problem is that I need to draw my raycast objects into the depth buffer *manually*.
I can't seem to do it right. The three tutorials I found varied too much for me to find a good middle ground and understand any of it. I'd appreciate all the help I can get.
EDIT:
I figured out why it wasn't working for me. The depth function I was using dipped below 0, going from -1 to 1. I was using Godot, and its depth value only goes from 0 to 1, so I needed to shift the function up.
The Code:
vec3 camera_forward = -camera_basis[2];
vec3 cam_to_hit = hit_position - camera_origin; // hit_position is the world-space hit position
float diff = far - near;
float eyeHitZ = -length(cam_to_hit) * dot(camera_forward, normalize(cam_to_hit));
float ndcDepth = ((far + near) + (2.0 * far * near) / eyeHitZ) / (far - near); // goes from -1 to 1
DEPTH = (ndcDepth + 1.0) / 2.0; // Godot's depth buffer only ever goes from 0 to 1
(In Godot 3.4's shader language, DEPTH maps to gl_FragDepth; this calculation is based on the third link.)
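For reference, this is just the depth term of the standard OpenGL projection, solved for NDC depth (a sketch of the algebra; eyeHitZ above plays the role of the eye-space depth z_eye, which is negative in front of the camera):

$$ z_{ndc} = \frac{(f + n) + \dfrac{2 f n}{z_{eye}}}{f - n}, \qquad \mathrm{DEPTH} = \frac{z_{ndc} + 1}{2} \in [0, 1] $$

where n and f are the near and far plane distances.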
Now I can render regular polygon meshes together with my raycast voxels: https://imgur.com/b5J1otQ
I'm doing some physics simulation rendering of a traveling wave. It's basically a 10x10x10 instanced set of cubes, with the world position of each vertex passed to the fragment shader, and then put through a wave equation:
E = 1 * cos(dot(worldPos, k_vector) - omega*time)
https://reddit.com/link/rpuex6/video/q7omdr8xt4881/player
(The reason for instancing is that eventually I want to do standing-wave simulations, which won't work with world-position interpolation.)
All of this is calculated on the fragment shader. The E-field value is passed as the red component of the fragment color. Time is just the simulation time (uniform).
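Written out, that's the standard traveling plane wave with unit amplitude; its phase velocity is $\omega / \lVert \mathbf{k} \rVert$ along the direction of $\mathbf{k}$:

$$ E(\mathbf{x}, t) = \cos(\mathbf{k} \cdot \mathbf{x} - \omega t) $$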
What I'm struggling with is the following:
I've tried not clearing the depth buffer and enabling blending with source = alpha and destination = 1 - src alpha. I've messed around with a few other things, and I'm at a loss. I really just want to render all of these cubes, even the ones *behind* the cubes in front.
The video attached just shows solid coloring (no blending) for clarity.
Please help.
Edit: I tried uploading the video and I think it failed, so I added it to the body of the post.
Edit 2: Thanks to those who responded. This is the solution I landed on; I'm pretty happy with the visualization:
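For reference, the usual recipe for unsorted transparent geometry like this is to keep the depth test (so opaque objects still occlude) but disable depth writes during the transparent pass. A minimal sketch, assuming plain OpenGL inside the render loop; drawAllCubes is a placeholder, not a function from the original post:

glEnable(GL_DEPTH_TEST);                            // still test against opaque geometry
glDepthMask(GL_FALSE);                              // but don't let transparent cubes hide each other
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // src = alpha, dst = 1 - src alpha
drawAllCubes();                                     // placeholder: the instanced cube draw call
glDepthMask(GL_TRUE);                               // restore depth writes for the next opaque pass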
I got 7 Days to Die all set up to look nice, with some fog and all, but sadly the weapon viewmodel is not on the same buffer or channel; you can even tell in game, because it runs at a lower FPS than the world.
Can ReShade do anything with this second buffer? I would like to add effects to it (a blur, maybe), or just merge it with the world's depth buffer so that fog and other effects don't stack on top of it.
Thanks!
I can't seem to get ReShade to see it. DisplayDepth just shows white or black, depending on whether I reverse it.
Without it I can't use MXAO and RTGI (which I really want to test out).
When trying to run Vindictus in DirectX 11 mode with ReShade's "Copy depth buffer before clear operations" enabled, the game crashes at every loading menu, towards the end of loading.
The following things did not make a difference:
The only things that stop the crashing are unticking "Copy depth buffer..." or running the game in DirectX 9 with ReShade as d3d9.dll. I've been running ReShade fine on DirectX 9 for a year now.
Any ideas what else I could try? I know another ReShade user who is running Vindictus on DX11 with ReShade 4.7 just fine; he couldn't figure out from the log files what was wrong for me.
Resolved. It turns out something had edited my fallout_default.ini in both my active install and my backup copy of the game. Reinstalled the game fresh and everything works again.
Thanks, u/Celtic_Spike.
Hey all, I've been replaying New Vegas recently, using MXAO and the RTGI shaders to improve the overall look of the game, and so far things have been working fine, until yesterday. I booted up the game after work and noticed this (image linked below): https://imgur.com/RbVSw71
I haven't changed anything or turned any settings on/off since it was working the day prior, and I've tried everything I could think of to fix it, including reinstalling the game, going through game mods, and turning every display-related game setting on and off; nothing seems to fix it.
Does anyone have any idea why the depth buffer is so messed up all of a sudden?
Does anyone know how to get the depth buffer for ReShade working in any of the games?
I've seen other people do it in other games, but I can't seem to get at least A New Beginning to work.
Preferably using the Vulkan backend, but if not, other backends will do.
Is there any way of getting depth buffers to work if all I get is just a solid colour? There's only one buffer, and I'm running a DX8 game through a DX9 wrapper; I don't know if that's the issue.
I think it has to do with the UI being an overlay over the whole game or something.
If anyone can help, I will gift you the game so you can look for a solution, if possible.
I'm currently playing the original Just Cause on PC, but after seeing what the Xbox 360 version does better, I decided to try to improve its look. The major issue I see is a shadow problem of some sort (maybe intended somehow): shadows are rendered much lighter on top of grass than on the roads, and at night they completely vanish from every surface but the roads. This makes them almost unnoticeable, leaving the game world very bland and low-contrast.
While messing with ReShade and Special K, I've noticed the game renders shadows in the game world via a 1024x1024 "vertex buffer(?)" of some sort. Changing random values in Cheat Engine from "1024" to "2048" or "4096" was what eventually led me to this finding. When this "buffer"'s width or height gets changed, the shadows become completely misaligned from the world objects and move around with the camera perspective. I eventually stumbled upon the d3d9 tab in ReShade's top menu, where this particular "buffer" shows up. It also shows up in Special K under the "Live Render Target View" with a D24S8 format.
To the main question:
Would it be possible, either with ReShade, a modified .dll, or any other tool (e.g. Cheat Engine), to use this "buffer"'s information to render new shadows in screen space on top of the old ones, or even to replace them altogether?
Here's a link to an album with captioned images to better demonstrate my points, Just in Cause: imgur.com/a/rIirvl0
Thanks for reading!
After hours of trying to figure out why it stopped working, I saw "Mirror's Edge - no depth buffer with MSAA/flickering" just under Metro in the compatibility list. Remembering that I had noticed some weird flickering on half of the screen after changing the live preview setting, I went and changed MSAA back to AAA. (I had recently found that 4x MSAA didn't have one banding issue at high angles that AAA did, so I had switched.) Once I did that, it was immediately fixed; I didn't even have to reload ReShade.
Hi everyone,
I need to create a custom fog effect to properly simulate light attenuation underwater in a VR simulation.
I have some experience with Unity and (very) basic skills with shaders. I have written a few basic screen-space shaders for desktop using the camera depth texture.
I'm not having much luck finding anything decent on how to write a screen-space shader for URP and VR, particularly with the use of the camera depth texture.
Could anyone point me at examples to help?
Many thanks.
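A common starting point for underwater attenuation is the Beer-Lambert law, evaluated per pixel from the camera depth texture (c here is an assumed per-channel absorption coefficient to tune, not something from this post; red is absorbed fastest underwater):

$$ I(d) = I_0 \, e^{-c d} $$

where d is the distance the light travels through the water.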
-- Update
I gave up on screen-space shaders in URP for VR. There's a bit of documentation about, but it all seems out of date; things that apparently worked in 2020 no longer work in 2021. I'm moving back to the built-in renderer, where things are fully documented in the official docs. I'll look at custom post-processing in URP again when it's fully supported and documented.
I wanted to use Marty McFly's RTGI shader, but (at least in Halo 3: ODST on Steam) the depth display is just a black screen, and the ray tracing doesn't do anything, since no geometry is being detected. I've tried countless tutorials, but nothing seems to work. Can anyone help?
We have some transparent objects in our scene and want to add some volume fog for them, which needs the G-buffer depth.
How can I write these transparent objects' depth into the G-buffer?
Hi!
So I've been banging my head against this all morning, and the documentation is not as verbose as I apparently need, so I thought I would ask here for help! Any insight is very much appreciated.
Basically, I'm trying to make an orthographic "fake fog" shader that applies a color on top of the scene, with an opacity based on the distance between the rendered pixel and the depth buffer.
My general idea was to have a shader graph that:
The only issue is that I can't figure out how to get the Scene Depth and Screen Position/Position nodes to play well together.
I've been trying a variety of options.
Does anyone have experience with these shader graph nodes? This seems like it should be very easy, but between minimal documentation that doesn't explain exactly what units the various options return, and the general fiddliness of debugging shader output, I'm kind of stuck.
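For what it's worth, the comparison described above usually reduces to the following, with both depths in the same linear eye-space units (D is a hypothetical falloff distance, not a value from this post):

$$ \alpha = \mathrm{saturate}\!\left( \frac{d_{scene} - d_{fragment}}{D} \right) $$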
Anyone know how to enable it? I saw a video of someone running the RT shader in BioShock Infinite, but there seems to be no depth buffer enabled, because DisplayDepth just shows a black screen, and I can't find a way to fix it.
One odd point: I can't see the depth map with the depth map shader in ReShade, but I can see my surroundings in the lighting debug view in the RTGI options. Does anyone know how to make it work with this game?
So, I've tried to use ReShade with KH2:FM in the recently ported KH 1.5 + 2.5 PC version, but the depth buffer is totally black, and that limits which effects I can use.
Does anyone know how to fix this? And why is this happening?
Edit: Just to clarify, I tried editing the pre-processor settings, but nothing worked.
I'm working on adding effects to my game, and I noticed that splitting up the draw calls causes depth to be ignored. I looked around and found that I needed to use DepthStencilState.Default in order to preserve depth between calls. However, whenever I do this, nothing renders on my screen. How can I fix this?
Here is my Draw method:
public override void Draw(SpriteBatch spriteBatch) {
    // Note: the third argument clears the depth buffer to 0. With DepthStencilState.Default
    // (LessEqual), only fragments at depth 0 can then pass the test, which would explain the
    // "only layer 0 renders" symptom below; clearing depth to 1f is the usual setup.
    _graphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Target, Color.CornflowerBlue, 0, 0);
    _graphicsDevice.DepthStencilState = DepthStencilState.Default;

    // Normal rendering
    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, null, null, cam.getMatrix() * cam.getTransformationMatrix());
    levelManager.Draw(spriteBatch);
    background.Draw(spriteBatch);
    levelEditor.Draw(spriteBatch);
    enemyManager.Draw(spriteBatch);
    if (paused) {
        pausePanel.Draw(spriteBatch);
        return_Button.Draw(spriteBatch);
        exit_Button.Draw(spriteBatch);
    }
    spriteBatch.End();

    // Using Effect
    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, null, fx, cam.getMatrix() * cam.getTransformationMatrix());
    player.Draw(spriteBatch);
    spriteBatch.End();
}
After doing some debugging, I found that nothing is rendered unless it is at layer 0. I'm not sure why this is happening. Here is the draw call in my sprite class:
public void Draw(SpriteBatch spriteBatch) {
    spriteBatch.Draw(AssetManager.textures[textureType], Position, new Rectangle(0, 0, width, height), Color.White, Rotation, Origin, Scale, SpriteEffects.None, Layer);
}
Edit: I suspect the issue lies with the alpha of my sprites (maybe). Using a test project, I noticed that the alpha does not carry over between sprite calls:
protected override void Draw(GameTime gameTime) {
    GraphicsDevice.Clear(Color.CornflowerBlue);
    _spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, null, null, null);
    _spriteBatch.Draw(texture, Vector2.Zero, new Rectangle(0, 0, texture.Width, texture.Height), Color.White, 0, Vector2.Zero, new Vector2(10, 10), SpriteEffects.None, 0.4f);
    _spriteBatch.End();
    _spr…
Now, I've tried this with multiple games, all with the same result, and it still hasn't worked. It seems that in games which are said to be compatible, for example Metro 2033 Redux, ReShade is receiving the 3D data (the colorful stuff), but the depth data used for lighting doesn't work in any of the games I've tried. I also tried all sorts of different D3D11/Vulkan settings and switched to logarithmic depth, but nothing gives me proper depth data. I'm trying to get ReShade ray tracing to work using qUINT, but without that depth info it doesn't work.
Maybe this is because I am using an AMD GPU? Any ideas?
I have a very simple setup, in fact a relatively early stage of vkguide.dev: a single render pass with a single subpass, using a color attachment and a depth attachment.
The color VkAttachmentDescription has .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED and .finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR. In its attachment reference in the VkSubpassDescription, it has .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL.
The depth VkAttachmentDescription has .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED and .finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL. In its attachment reference in the VkSubpassDescription, it has .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
When not specifying explicit VkSubpassDependency's, everything works fine, but Vulkan is just using its conservative default dependencies. My goal is to understand this better (or even at all). I notice that when I provide empty dependencies and activate my synchronization validation layer, I get:
> Validation Error: [ SYNC-HAZARD-WRITE_AFTER_WRITE ] Object 0: handle = 0x83d4ee000000000b, type = VK_OBJECT_TYPE_RENDER_PASS; | MessageID = 0xfdf9f5e1 | vkCmdBeginRenderPass: Hazard WRITE_AFTER_WRITE vs. layout transition in subpass 0 for attachment 1 aspect depth during load with loadOp VK_ATTACHMENT_LOAD_OP_CLEAR.
(attachment 1 is the depth attachment)
The minimal dependency I managed to write to make this go away is:
const VkSubpassDependency in_dependency{
    .srcSubpass = VK_SUBPASS_EXTERNAL,                            // everything before this render pass
    .dstSubpass = 0,                                              // the first (and only) subpass
    .srcStageMask = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,            // nothing to wait for; just a chaining point for the layout transition
    .dstStageMask = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT,   // the stage where the depth loadOp (the clear) executes
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT // the clear counts as a depth/stencil attachment write
};
But I don't really understand. I've found this definition of a write-after-write error:
> A write-after-write, or WaW, hazard occurs when a programmer expects to overwrite the same location in memory multiple times and that only the results of the last write will be visible to subsequent readers. If the writes are rescheduled with respect to one another, then only the result of the write that happened to execute last will be visible to readers.
As far as parsing that dependency goes, I understand that it declares a dependency from stuff that happens before the render pass to the first/only subpass, and that it's about the depth attachment write. I also think this is about image layout transitions. But I don't really understand why there's a hazard, or w…
There are some branches and leaves in our scene, made from transparent objects. We need to write their depth into the G-buffer so that the volume fog can take effect.
How can I do that?
Thanks for your help.
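The common trick for branches and leaves is to render them as alpha-tested "cutout" geometry, so they behave like opaque objects and write depth during the G-buffer pass. A minimal sketch, assuming plain OpenGL; every name here is a placeholder, not from the original post:

glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);  // placeholder: the G-buffer FBO
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);                           // foliage writes depth like opaque geometry
glUseProgram(cutoutProgram);                    // placeholder: fragment shader discards fragments below an alpha cutoff
drawFoliage();                                  // placeholder: branch and leaf meshes

Fully blended (smoothly transparent) surfaces can't be handled this way; those generally need a separate depth pass or per-object fog instead.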