Can someone share with me how they visualize homotopy equivalence between two spaces?
I have always visualized a homeomorphism as being able to deform one space into the other without tearing or gluing.
It seems to me that homotopy equivalence is something else. I can follow the definition, but I'm not getting a good handle on it. It's almost like homeomorphism but not quite, so it has been really hard for me to get a feel for what I need to do when I'm asked to prove that two spaces are homotopy equivalent.
Furthermore, I am finding it increasingly hard to visualize many of these spaces, like the projective plane or a connected sum of real projective planes. Earlier I was asked to prove things like
the connected sum of projective planes is homotopy equivalent to a bouquet of circles,
or that the union of 3 circles laid out horizontally on the x-axis, each touching the next at a point, is homotopy equivalent to a bouquet of 3 circles. (I am done with the homework, or rather, the deadline has passed, so this isn't a homework-help post. Professor, in case you are reading this: XD)
I thought I understood homotopy and homotopy equivalence, but I really don't, and I could really use some help.
thanks!
I've implemented volumetric lighting by sampling a depth map inside a cone of light. However, currently I'm marching through world space and doing a matrix multiply to get into the light's projective space for each sample.
In the talk "Low Complexity, High Fidelity: The Rendering of INSIDE" (I can't seem to find the slides currently) there's one part where they mention stepping along the ray in projective space. The part that confuses me is that depth is nonlinearly distributed after converting to projection space, so it's not clear how to simply "step" through the space (to avoid the matrix multiply, I guess). Is this even possible? Thanks
UPDATE: https://imgur.com/UYQhmRp - it works!
Using a four-component vector (vec4) in the light's projection space as my step vector worked. Then, once I've stepped and am ready to sample the light, it's just a matter of dividing by the w component (the perspective divide) and remapping the xyz components from [-1, 1] to [0, 1]: the xy components then let me sample the light's depth texture in [0, 1] UV space, and the z component in [0, 1] matches the hardware depth buffer output. I don't know if any of this makes sense unless you basically already know what I'm talking about.
Basic steps for volumetric lighting that I'm using:
Render the scene to a hardware depth buffer from the perspective of the light (similar to any spot light)
From the scene's normal camera perspective, render the front faces of the light cone's geometry to a depth buffer. The light cone's geometry should fit inside the frustum used to render step 1. This depth buffer is half-sized in my case, for efficiency.
This is the step where we actually do the ray marching. Render the same geometry from step 2, but back faces this time, which gives us the world positions of the back-face fragments. Using clip space, sample the depth buffer from step 2 to get the depth of the front faces, and reconstruct their world-space positions from it. Now we have the world-space positions of both the back and front faces for this fragment, so we know the start and end of the segment along which to place our samples. Convert both to the light's projection space (using the view-projection matrix from step 1) and divide the difference by the number of samples to get the step vector (a vec4). Step through, adding this vec4 step vector to the vec4 currentPosition, then do this kind of thing:
vec3 projectionCoords = currentPositionLS.xyz / currentPositionLS.w; // perspective divide
projectionCoords = projectionCoords * 0.5 + 0.5;                     // [-1, 1] -> [0, 1]
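The reason stepping linearly in clip space is legitimate can be sketched outside the shader. This is a minimal numpy sketch, not the poster's actual shader: the projection matrix and the ray endpoints are made-up values. Because the view-projection matrix is linear, adding a constant vec4 step in clip space and doing the perspective divide per sample gives exactly the same result as projecting each world-space sample individually.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """OpenGL-style perspective projection matrix (illustrative)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Hypothetical light view-projection (view taken as identity for simplicity).
M = perspective(np.radians(60.0), 1.0, 0.1, 100.0)

# Made-up world-space endpoints of the ray segment, in front of the light.
start_ws = np.array([0.5, 0.2, -2.0, 1.0])
end_ws   = np.array([-0.3, 0.1, -20.0, 1.0])

# Transform both endpoints ONCE, then step linearly in clip space.
start_cs, end_cs = M @ start_ws, M @ end_ws
n_samples = 8
step = (end_cs - start_cs) / n_samples

pos_cs = start_cs.copy()
for i in range(n_samples):
    pos_cs = pos_cs + step
    ndc = pos_cs[:3] / pos_cs[3]   # perspective divide
    uvz = ndc * 0.5 + 0.5          # remap [-1, 1] -> [0, 1] for UV / depth compare
    # By linearity of M, this matches projecting the lerped world-space point:
    t = (i + 1) / n_samples
    ref = M @ (start_ws + t * (end_ws - start_ws))
    assert np.allclose(ndc, ref[:3] / ref[3])
```

Note the samples end up evenly spaced in world space (not in post-divide depth), which is exactly what a ray march wants; only one matrix multiply per endpoint is needed.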
I was casually reading about projective space when I read that two planes, parallel or not, meet exactly once, creating a line. Shouldn't they create infinitely many lines, kind of like a circle made of the points at infinity? Because if they extend in 2D and meet the other plane at infinity, they should meet it at every point at infinity. Sorry for my English.
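One way to untangle this: the two planes meet in exactly one *line*, and that line itself contains infinitely many points; for parallel planes, it is precisely the line at infinity. A minimal numpy sketch, using the convention that a point of RP^3 is (x, y, z, w) up to scale and a plane is the row vector of its homogeneous equation (the specific planes are made up for illustration):

```python
import numpy as np

# The affine plane z = c becomes the homogeneous equation z - c*w = 0,
# i.e. the coefficient row [0, 0, 1, -c].
P1 = np.array([0.0, 0.0, 1.0, 0.0])   # plane z = 0
P2 = np.array([0.0, 0.0, 1.0, -1.0])  # plane z = 1, parallel to P1

# Points lying on both planes = null space of the 2x4 coefficient matrix.
A = np.vstack([P1, P2])
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]   # 2-dimensional null space -> ONE projective line of solutions

# Every common point has z = 0 and w = 0: the intersection is the single
# line at infinity, made up of one point for each direction (x, y) in the planes.
assert np.allclose(null_basis[:, 2], 0.0)
assert np.allclose(null_basis[:, 3], 0.0)
```

So the statement and the intuition are both right: it is one line of intersection, but that one line consists of infinitely many points at infinity.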
Unfortunately I am not a mathematician by training, but there is a question about geometry that I would like to know the answer to.
Is it possible to take a standard 3-dimensional Euclidean space (describing 3D scenes, for instance) and, without additional information, turn it into a projective space with 3 (+1) dimensions? I know that getting a projective plane out of Euclidean 3-space is possible. But is it possible, mathematically, to artificially add another dimension (e.g. with points at infinity interpreted as the vantage point from which the Euclidean 3-space is viewed) and view the 3D objects from different perspectives in space, depending on the subsequent transformations?
I hope to find an answer to this, or a hint at something I can read about it!
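The standard construction here is homogenization: append w = 1 to each Euclidean point, act on it with a 4x4 matrix, then divide by w to come back. A minimal sketch, assuming a deliberately simple made-up "camera" matrix whose last row produces the depth divide:

```python
import numpy as np

def homogenize(p):
    """Embed a Euclidean 3D point into projective space RP^3 by appending w = 1."""
    return np.append(p, 1.0)

def dehomogenize(ph):
    """Map back to Euclidean 3-space (valid only when w != 0)."""
    return ph[:3] / ph[3]

# A made-up 4x4 projective transform: a pinhole camera at the origin looking
# down -z. The last row sets w' = -z, so the divide shrinks distant objects.
camera = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
])

p = np.array([2.0, 1.0, -4.0])  # a Euclidean point in the scene
image = dehomogenize(camera @ homogenize(p))
# x and y come out scaled by 1/|z|: perspective foreshortening.
```

Changing the 4x4 matrix (any invertible one defines a projective transformation) is exactly "viewing the 3D objects from different perspectives"; points with w = 0 are the points at infinity that the extra dimension buys you.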
I think the people on this sub are of above-average intelligence, so I would like to hear your picks for the best projects in the wormhole of the crypto space.
I was wondering if anyone has a way to motivate why projective space is used so much in AG, other than "any two lines meet in a point" and "things work more nicely in projective space."
Xxxnifty - N$FW token
Super bullish about this news; it had 3 GREAT announcements in 1.
Check out the official TG to see what all the fuss is about: https://t.me/xxxnifty_official
1️⃣ Team just announced 2 top-10 exchanges on the way!
2️⃣ Launch of the alpha release of Pleasurely.com, xxxNifty's adult social platform (an OnlyFans-like social platform, but way better).
3️⃣ They added new team members with lots of experience to the core team.
✔️ Largest NFT marketplace in their space
✔️ 700 adult NFTs on their marketplace
✔️ 100+ creators on the platform to date (of any gender now!), adding more daily
✔️ 500+ NFT sales, over 200 1-of-1's
✔️ They launched the NFT marketplace in April 2021 and the token in May 2021, so the project is really moving forward and the devs are working full time on it
✔️ 8 partnerships with agencies
✔️ 8 brand ambassadors, including Amouranth and NOFACEGIRL, who have huge social media followings: over 20 million combined
✔️ Deflationary tokenomics benefit holders
✔️ Daily NFT sales
✔️ $8 million MC, 2 working platforms utilizing the utility of their native NSFW token
1: xxxNifty - NFT marketplace 2: Pleasurely - social platform
✔️ xxxNifty is a registered business, meaning the devs and team are all doxxed
✔️ TechRate audit approved
I came up with this derivation of the off-axis frustum matrix a while back, and I thought you guys might find it interesting. Of course there are other derivations out there, but I feel that this one lends more insight into what projective space is and why it is useful for graphics. It may or may not help you with visualizing homogeneous coordinates.
The post introduces projective space and discusses how it can be used to perform tasks that are not possible with matrices alone; specifically, we explore its role in performing translations and removing intersections.
We then use two-dimensional real projective space (x, y, w) to give a visual derivation of the 2D frustum transformation. The frustum matrix is decomposed into 3x3 rotation, skew, and scale matrices.
Finally we generalize the 2D transform to three dimensions.
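The translation point above can be made concrete with a small numpy sketch (the point values are made up for illustration). No 2x2 matrix acting on (x, y) can translate, since a linear map always fixes the origin; in (x, y, w) coordinates, translation becomes a 3x3 linear map:

```python
import numpy as np

def translate(tx, ty):
    """2D translation as a 3x3 matrix on homogeneous (x, y, w) coordinates."""
    return np.array([
        [1.0, 0.0, tx],
        [0.0, 1.0, ty],
        [0.0, 0.0, 1.0],
    ])

# The Euclidean point (2, 3), written with w = 1:
p = np.array([2.0, 3.0, 1.0])
q = translate(5.0, -1.0) @ p
assert np.allclose(q, [7.0, 2.0, 1.0])   # (2, 3) moved to (7, 2)

# A direction (a point at infinity, w = 0) is unaffected by translation:
d = np.array([1.0, 0.0, 0.0])
assert np.allclose(translate(5.0, -1.0) @ d, d)
```

The same trick with (x, y, z, w) is what lets a single 4x4 matrix combine rotation, translation, and perspective in the 3D case.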
Note that I posted this yesterday, but I failed at reading the posting guidelines.
I've been poking around learning about Severi-Brauer varieties, and the concept of a Grassmann manifold came up. It is a generalization of projective spaces: Gr(r, V) is the space of all linear subspaces of V of dimension r. So Gr(1, k^(n+1)) is isomorphic to P^n(k) for a field k, but for higher r the relationship isn't clear to me.
The reason I ask is because there's a definition that takes a subset of one of these spaces and turns it into a projective variety, and I'd like to know if that's because of the space it's embedded in or if it's because of a quirk of how the definition is set up.
In partnership with Arca Space, Aether helped with the testing and development of Arca Space's ecological propulsion system (EcoRocket). The EcoRocket will be launched on October 10. Moreover, on this occasion, Arca Space will launch the Aether cryptocurrency into space.
EcoRocket is a three-stage orbital vehicle with its first two stages fully reusable. The first two stages use an environmentally friendly water-based propellant that will bring the third stage to an altitude of 50 km and a speed of 5,400 km/h. The third stage, fuelled by RP-1 and high-test peroxide, will then propel a payload of up to 10 kilograms into orbit.
AETHER will also develop and send a small satellite into orbit. The satellite will be called AETHER, and it will be used by other crypto projects that have their own blockchains to run nodes.
✅ Real project
✅ Partnership with an aerospace company
✅ Eco launch system developed for rockets
✅ First satellite running nodes in space
✅ First EcoRocket launched into space
✅ First token that will really be sent into space
Tokenomics:
14.0% transaction tax:
- 3% redirected to the project
- 2.5% buyback and burn
- 5% Aether reflections back to holders (oldies but goldies; the contract will not sell in order to buy BNB and then other coins with it, so the chart will stay clean and stable)
- 3.5% to marketing & development of Aether
Total supply: 1,000,000,000
Website: www.aetheruniverse.com
Telegram: https://t.me/aether_BSC
Medium: https://medium.com/@Aether_Crypto
Twitter: https://twitter.com/Aether_Crypto
Reddit: https://www.reddit.com/r/Aether_Crypto
Discord: https://discord.gg/uvuArrUg
That really stretches my intuition. I am a recreational mathematician, and I was wondering if there were better explanations than the one afforded by the wiki page.