Problem involving sampling without replacement and with replacement based on some criteria

I have a bag with 90 chocolates and 10 mints. If I draw a chocolate, I eat it. If I draw a mint, I put it back in the bag, draw again, and eat whatever I draw on that next draw no matter what. What's the probability that the last item I draw is a mint?

I tried writing a simulation in Python but am having difficulty, as I need to randomly sample from a list without replacement and then place the object back in the list if it's a mint. Sampling without replacement in Python is weird enough.

Wondering if there is an analytical way of doing this, or a better way to simulate?
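The game is simple to simulate directly with the standard library; no special sampling helpers are needed. A minimal sketch (the function name and trial count are my own choices, not from the post):

```python
import random

def last_is_mint(chocolates=90, mints=10):
    """Play one game; return True if the last item eaten is a mint."""
    bag = ["C"] * chocolates + ["M"] * mints
    last = None
    while bag:
        i = random.randrange(len(bag))
        if bag[i] == "C":
            last = bag.pop(i)  # chocolate: eat it (remove from the bag)
        else:
            # mint: put it back (i.e. don't remove it), then eat the
            # next draw no matter what it is
            i = random.randrange(len(bag))
            last = bag.pop(i)
    return last == "M"

trials = 20_000
p = sum(last_is_mint() for _ in range(trials)) / trials
print(f"P(last item is a mint) β‰ˆ {p:.3f}")
```

Popping a uniformly random index *is* sampling without replacement, and simply not popping a drawn mint is the "put it back" step, so `random.sample` never has to fight the replacement rule.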

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/whiteboy2471
πŸ“…︎ Feb 26 2020
🚨︎ report
How To Calculate Probability of Sampling With Variable Odds AND Without Replacement

I'm having a statistics problem that I can't seem to figure out. I'm not a statistician and pretty much just know the basics here. The actual problem is a bit more complicated than the example I'm about to present, but the principle is the same:


Setup

Imagine a room with you and 4 other people.

In this room, there are three hats.

You and all the other people write your name on tiny pieces of paper.

You then each place a different number of pieces of paper into each hat:

                     Person 1   Person 2   Person 3   Person 4 (You)   Person 5
Hat 1 (60 pieces)       18         15         12            9              6
Hat 2 (50 pieces)       16         13         10            7              4
Hat 3 (40 pieces)       14         11          8            5              2

To put that into percentages, this is how much of the hat each person takes up:

         Person 1   Person 2   Person 3   Person 4 (You)   Person 5
Hat 1      30%        25%        20%          15%            10%
Hat 2      32%        26%        20%          14%             8%
Hat 3      35%        27.5%      20%          12.5%           5%

Sampling

  1. First, a name is pulled out of Hat 1 and is set aside in a cup. Then, all papers with that person's name are removed from Hat 2 and Hat 3.

  2. Then, a name is pulled from Hat 2 and is put in the cup. Then, all papers with that person's name are removed from Hat 3.

  3. Lastly, a name is pulled from Hat 3 and is put in the cup.


The Problem

Q. What is the total percent chance that you (Person 4) will have your name in the cup after this sampling?


My Logic (feel free to skip this if you aren't interested in hearing why I'm confused, lol)

Unfortunately, I can't wrap my head around how the variable odds of the second and third draws affect your total chances of "winning" and getting your name pulled.

In a "best case scenario" where Hat 1 is won by Person 1 and Hat 2 is won by person 2, you (Person 4) have a ~21.59% chance to win Hat 2 and a ~33.33% to win Hat 3.

So, I'd guess that the formula for this is the following (even though it's not possible to win two hats, this is the only formula I can think of for this):

  • P(winning) = P(winning Hat 1) + P(winning Hat 2) + P(winning Hat 3) - P(winning Hat 1 AND Hat 2) - P(winning Hat 1 AND Hat 3) - P(winning Hat 2 AND Hat 3) + P(winning Hat 1 AND Hat 2 AND Hat 3)

  • P(winning) = 15% + 21.59% + 33.33% - 3.24% - 5% - 7.2% + 1.08%

  • P(winning) = ~55.56%

BUT

There is als…

[post truncated]
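Since the post is cut off, a brute-force check is a reliable way to test any candidate formula. A sketch of the draw procedure above; note the Person 5 column is missing from the tables, so the counts 6, 4 and 2 below are inferred from the stated hat totals of 60, 50 and 40:

```python
import random

# Pieces per person per hat; Person 5's counts (6, 4, 2) are inferred
# from the stated hat totals (60, 50, 40), not given in the post.
hats = [
    {1: 18, 2: 15, 3: 12, 4: 9, 5: 6},   # Hat 1
    {1: 16, 2: 13, 3: 10, 4: 7, 5: 4},   # Hat 2
    {1: 14, 2: 11, 3: 8,  4: 5, 5: 2},   # Hat 3
]

def draw_winners():
    """Draw Hats 1, 2, 3 in order, removing earlier winners' papers."""
    winners = []
    for hat in hats:
        pool = {person: n for person, n in hat.items()
                if person not in winners}
        people, weights = zip(*pool.items())
        winners.append(random.choices(people, weights=weights)[0])
    return winners

trials = 100_000
wins = sum(4 in draw_winners() for _ in range(trials))
print(f"P(Person 4 in the cup) β‰ˆ {wins / trials:.4f}")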

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/TomLikesGuitar
πŸ“…︎ Apr 04 2019
🚨︎ report
Sampling with replacement - how long until everything has been sampled at least once?

Way back, I used a program called Audiograbber for ripping music from CDs. It came in two versions: free and paid. The free version had a limitation: it would only allow half the tracks to be copied, and would select these tracks at random. Each time the disc was inserted it would select a new selection of half the tracks. To copy the full CD would need each track to have been selected at least once, and this could take many rounds. I've always wondered how I could work out the probability of the number of turns needed to get all the tracks. (I think there was a bias towards odd-numbered tracks, but for this assume all tracks have an equal chance of being selected each turn.)

- you could get lucky and have all the tracks that were not in the first round selected in the second round.
- alternatively, it is very unlikely, but not impossible, that after 100 rounds they still have not all been selected at least once.

different but similar scenario

* roll a d6 dice
* roll again
* how many rolls until all 6 numbers have come up at least once? 

at its simplest you could consider a coin toss; how to figure the probability that it will take 'x' tosses to get at least one head and one tail. I think for x=2 it would be 0.5, then 0.25 for x=3 and so on (since the probability of 2 heads or 2 tails for the first two tosses would be 0.5, the chances that the third toss would be different would be 0.5, giving a combined probability of 0.5*0.5 = 0.25 for exactly three tosses).
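For the Audiograbber case there is an analytical answer via inclusion-exclusion over the tracks that are never selected: if each round picks a uniform random half (n/2 of n tracks), the chance that a fixed set of j tracks is all missed in one round is C(n-j, n/2) / C(n, n/2). A sketch (the function name is mine):

```python
from math import comb

def p_all_covered(n, r):
    """Exact P(every one of n tracks is picked at least once in r rounds),
    where each round selects a uniform random half (n // 2) of the tracks.
    Inclusion-exclusion over which tracks are never picked."""
    k = n // 2
    total = comb(n, k)
    return sum((-1) ** j * comb(n, j) * (comb(n - j, k) / total) ** r
               for j in range(n - k + 1))

# P(exactly r rounds needed) = p_all_covered(n, r) - p_all_covered(n, r - 1)
for r in range(2, 8):
    print(r, round(p_all_covered(10, r), 4))
```

With n = 2 and k = 1 this reduces to 1 - 2^(1-r), which reproduces the coin-toss figures above: 0.5 for exactly two tosses and 0.25 for exactly three.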

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/batguanoz
πŸ“…︎ Dec 30 2012
🚨︎ report
Direct sampling is super quiet. Running from mixer to Maschine. As you can see in pic one, the waveform I got was mad quiet. I got this same result with my other turntables. Think it's just shit cords? reddit.com/gallery/r4mjp8
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Goblinpipes
πŸ“…︎ Nov 29 2021
🚨︎ report
[S] Estimating Gradients for Discrete Random Variables by Sampling without Replacement shortscience.org/paper?bi…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/research_mlbot
πŸ“…︎ Feb 11 2020
🚨︎ report
Can an Octatrack be a sufficient replacement for the sampling capabilities of Ableton?

Hey synheads! I was just wondering, do you think an Octatrack would cover me for intense sampling of hardware and vinyl records? I currently use the sampler in Ableton but I don't find it very expressive. I'd be selling an Ableton Push to fund an Octatrack purchase. Is this a good move? I know it's very subjective but I'm hoping for some sound wisdom from grand masters. Cheers!

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/zentothetenth
πŸ“…︎ Jun 08 2015
🚨︎ report
Two American men interfere with the politics of an impoverished wartorn country, assassinating the ruling warlord while funding and installing an insurgency group as its replacement. Also, there’s a boat.
πŸ‘︎ 18
πŸ’¬︎
πŸ‘€︎ u/mlktwx
πŸ“…︎ Dec 01 2021
🚨︎ report
Completion replacements which work with `eval-expression`?

I often use M-: (eval-expression) to check or set variable values or as a calculator. But if I try to use completion there, I always end up with the *Completions* buffer. Is there any nice completion replacement package which deals with this use case? I think it’s a matter of supporting completion-at-point inside the minibuffer.

So far, I found that:

  • company-mode is not supported in the minibuffer.
  • Helm does have some support for this, but it’s a little wonky. Something about the way it handles spaces makes it unintuitive in this use case.
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/gepardcv
πŸ“…︎ Nov 24 2021
🚨︎ report
First foundation pan with sponge, replacement and a bonus deluxe sample pan
πŸ‘︎ 63
πŸ’¬︎
πŸ‘€︎ u/shinkirou_coon
πŸ“…︎ Aug 19 2019
🚨︎ report
Sampling my attempts at replicating neon signs with embroidery! it's really difficult but gives a super fun effect! this piece now lives on a baseball hat
πŸ‘︎ 443
πŸ’¬︎
πŸ‘€︎ u/broiderybb
πŸ“…︎ Mar 26 2020
🚨︎ report
Musique Concrete.101 : Sampling and sound manipulation (on tape recorders) with Delia Derbyshire v.redd.it/ycbkjxkw33p41
πŸ‘︎ 621
πŸ’¬︎
πŸ“…︎ Mar 26 2020
🚨︎ report
ProFiber, new replacement to an already Sated market! What is ProFiber? (with sample offer)

For those who don't mind reading in depth, this is a better read: ProFiber Reddit

TL;DR version below.

ProFiber is a meal replacement designed for those who are afflicted with several risk factors for heart disease. The main three are high blood sugar, high cholesterol, and a larger waistline. Of all meals and meal replacements, this is the highest in fiber, coming in at nearly 17 grams. With an additional 21 grams of protein, this keeps you full.

Pretty transparent graph showing our product

I, along with my partner, Dr. Silverman, have been working on this in Sarasota, FL for 2.5 years, with it being on market for just under a year. We wanted to test viability in an area that fit our target demographic, and now with a good retention and success rate, we can say it is viable.

The product itself comes in one flavor, original. We named it such because we honestly get mixed opinions as to what it tastes like nearly every time. A normal bag is $50 plus $10 flat shipping - it's heavy at 15 servings. BUT for a limited time we are offering samples (only 100!), though it doesn't quite fit the financials.

I know you all are a well-informed group and wanted to get your opinion on it, so I've set up a sample product where you can get it for 1 cent plus $2 shipping right here. <-- THERE

In the document up top, I list who shouldn't try it. Don't want any false hope!

This is a really high quality product, with a great purpose, so I hope you guys are receptive. Would also love to answer any and all questions.

-Antonio

ANOTHER LINK TO ENTER

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/profiberantonio
πŸ“…︎ Mar 22 2019
🚨︎ report
Does anyone have any experience installing Akkon's First gen TSX replacement headlights? I'm having trouble with fitment of the HID ballasts.

Link

The ballasts don't seem to fit properly into the headlight housing, and the holes for the screws don't align at all. Does anyone know any way I can make them fit, or are there different ballasts that I can use?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Duncan1297
πŸ“…︎ Nov 22 2021
🚨︎ report
UEFA Official on Phil Jones's doping sample: "He wanted to leave to celebrate with the team, but I told him that he had to do the sampling first. He then told me I was a f***er doing this and that I had a f***er's job and how I could be so f***ing stupid to consider having such a f***ing job." fourfourtwo.com/features/…
πŸ‘︎ 2k
πŸ’¬︎
πŸ‘€︎ u/se_rob
πŸ“…︎ Aug 23 2017
🚨︎ report
[R] Questions implementing "Deep Learning with Importance Sampling"

I am implementing the paper "Not All Samples Are Created Equal: Deep Learning with Importance Sampling" in PyTorch (arxiv, Keras implementation).

The paper

The idea of the paper is as follow:

  • Instead of selecting samples at random, you can train more efficiently by selecting the "hard" samples with higher priority
  • You can achieve this either by prioritizing proportionally to the loss, but this is not very accurate. Or you can use the gradient norm, but that's very expensive to compute
  • In the paper, they come up with a relatively tight upper bound of the gradient norm, which can be computed fast enough and results in a nice training speed-up. Cool!

The problem

The paper is very well written in general, and a full implementation is provided. I used to think this was the clearest paper ever, before I started working on a PyTorch implementation.

Now I realise that:

  • Some of the details in the paper are not clear (or maybe I'm just not familiar enough with the math behind ML)
  • The Keras implementation is super heavy, with many levels of inheritance which makes it hard to follow the flow of the algorithm to actually understand what's going on

How their upper bound works

So, the core of the paper appears on page 3-4 in this version.

They show that the l2-norm of the gradient (eq 16) is bounded by eq 19. I don't understand intuitively what the quantities expressed in eq14 and eq15 are, but I trust them that the inequality stands.

Pic of the relevant section

Then, they simplify this inequality into eq 20, with the following justification:

> Various weight initialization (Glorot & Bengio, 2010) and activation normalization techniques (Ioffe & Szegedy, 2015; Ba et al., 2016) uniformise the activations across samples. As a result, the variation of the gradient norm is mostly captured by the gradient of the loss function with respect to the pre-activation outputs of the last layer of our neural network.

So if I understand correctly, because of nice initialisation methods, plus batch-norm, layer-norm etc, we can assume that the norm of the gradient will be roughly the same at each layer.

(I'm not 100% sure my understanding is correct. Another interpretation of the part in bold would be that the norm of the gradient will be much larger at the last layer than at an…

[post truncated]
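For what it's worth, in the common softmax + cross-entropy setting the simplified bound (eq 20) needs no extra backward pass at all: the gradient of the loss with respect to the last layer's pre-activations is just softmax(z) - onehot(y). A dependency-free sketch of the resulting sampling scheme (all names are mine; the PyTorch/Keras plumbing from the paper's code is deliberately omitted):

```python
import math
import random

def importance_scores(logits, targets):
    """Per-sample score from the paper's simplified bound: the l2-norm
    of dL/dz at the last layer, which for softmax + cross-entropy is
    softmax(z) - onehot(y)."""
    scores = []
    for z, y in zip(logits, targets):
        m = max(z)
        exps = [math.exp(v - m) for v in z]       # stable softmax
        s = sum(exps)
        grad = [e / s - (1.0 if i == y else 0.0)
                for i, e in enumerate(exps)]
        scores.append(math.sqrt(sum(g * g for g in grad)))
    return scores

# Toy batch: sample minibatch indices proportionally to the scores,
# then reweight each sampled loss by 1 / (N * p_i) so the gradient
# estimate stays unbiased.
logits = [[random.gauss(0, 1) for _ in range(5)] for _ in range(8)]
targets = [random.randrange(5) for _ in range(8)]
scores = importance_scores(logits, targets)
p = [s / sum(scores) for s in scores]
idx = random.choices(range(len(p)), weights=p, k=4)
weights = [1.0 / (len(p) * p[i]) for i in idx]
```

A confidently classified sample gets a near-zero score and is rarely resampled, which is exactly the "skip the easy samples" effect the paper is after.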

πŸ‘︎ 84
πŸ’¬︎
πŸ‘€︎ u/MasterScrat
πŸ“…︎ Feb 26 2020
🚨︎ report
Western Digital Starts sampling 20 TB HDDs with SMR and 18 TB HDDs with CMR guru3d.com/news-story/wes…
πŸ‘︎ 38
πŸ’¬︎
πŸ“…︎ Dec 23 2019
🚨︎ report
Western Digital Starts sampling 20 TB HDDs with SMR and 18 TB HDDs with CMR guru3d.com/news-story/wes…
πŸ‘︎ 85
πŸ’¬︎
πŸ‘€︎ u/avrebirth
πŸ“…︎ Dec 21 2019
🚨︎ report
Any idea what these are that came with a pack of replacement accessories for my roborock s5 max? tried fitting them in those holes on the mop bracket but they don't fit.
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/paulr70
πŸ“…︎ Oct 03 2021
🚨︎ report
[Rovell] New Era takes its biggest risk yet on its NFL deal with this year’s draft hats. A mix of great and horrible. Here’s a sampling... twitter.com/darrenrovell/…
πŸ‘︎ 122
πŸ’¬︎
πŸ‘€︎ u/Stauce52
πŸ“…︎ Apr 02 2019
🚨︎ report
Does bagging (specifically the choice to sample with replacement) lead to better random forests?

I use ML algorithms, and have some stats knowledge, but didn't do a stats degree. I hadn't ever worried too much about the use of bagging in random forests, but I did today, and I can't work out why it matters if we sample with replacement or not. I am not that knowledgeable about bootstrapping - I know the basics, but haven't studied the topic in detail.

Let's take two scenarios. I have a data set with n observations, and I either

- build a standard random forest, using bagging to randomise the training data used in each tree. To keep the examples the same, let's say we opt to create training data sets that contain 60% of the total number of observations in my original data set, but as we're bagging, these are sampled with replacement

- build a random forest, where rather than bagging, I sample 60% of my data for each tree without replacement

And we keep everything else the same.

Does the second scenario result in a worse result? And if so, is this universally true, or would it apply to certain distributions of data? Does sample size matter? What is it about bootstrapping that makes this true?

My naive assumption is that whilst I understand that sampling with replacement will lead to different training data sets because there will be duplicates, it doesn't follow that this leads to a better random forest.
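One mechanical difference is easy to check directly: a with-replacement sample contains duplicates, so each tree effectively sees fewer distinct observations than a without-replacement sample of the same size, and that alone changes how diverse the trees are. A quick sketch (the function name is mine; the numbers mirror the 60% example above):

```python
import random

def distinct_fraction(n, m, with_replacement, trials=2_000):
    """Average fraction of distinct observations in a size-m sample
    drawn from a data set of n observations."""
    total = 0.0
    for _ in range(trials):
        if with_replacement:
            sample = [random.randrange(n) for _ in range(m)]
        else:
            sample = random.sample(range(n), m)
        total += len(set(sample)) / m
    return total / trials

n, m = 1000, 600  # 60% of the data per tree, as in the question
print(distinct_fraction(n, m, True))   # bootstrap-style: duplicates appear
print(distinct_fraction(n, m, False))  # subsample: always 1.0
```

With replacement, the expected distinct fraction is n(1 - (1 - 1/n)^m)/m, about 0.75 here, so a 60% bootstrap sees only ~45% of the data set's unique points per tree versus 60% for the subsample. Whether that extra decorrelation helps or hurts accuracy is data-dependent, which is part of why the question has no universal answer.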

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/fissionfish789
πŸ“…︎ Oct 27 2018
🚨︎ report
DAQmx Sampling Rate Precision

Hi everyone !

I'm doing some data collection with LabView and I have a hard time understanding how DAQmx works.

I'm collecting 6000 samples at 100 Hz using continuous acquisition (and yes, I will not change it to finite, for some more complicated reasons). I'm reading 4 samples at a time, doing some calculations, and sending the results to separate loops (producer/consumer model) to log them and display them on the front panel.

Here is an example

https://preview.redd.it/us67eo03bjm71.png?width=1363&format=png&auto=webp&s=f28bcbb2f936fbd1ad482cbfe5787392266fc220

I know that there can be some delay with the front panel update or logging. I don't care about that; it's minimal and working well with notifiers and queues.

My questions are the following:

  1. I've read that the sampling rate can slow down when there is a memory issue. I know that graph updates, for instance, can slow down a loop, but could someone explain how that can slow down the DAQ card's sampling rate? I thought that, with the buffer, even if the loop is slowed down, the DAQ is still collecting data based on its own sample clock. Thus a 10 ms loop could be slowed down, for some reason, to 100 ms, while the DAQ will still put 10 samples in its buffer even if there was only one iteration. In other words, data collection is not affected thanks to the buffer; it's the reading that is affected. At least, this is my understanding. Am I wrong?
  2. If the sample rate can indeed be slowed down at the DAQ level, is there a way to verify the precision of my sample rate?

Thanks everyone for your time !

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/The_Dead_Parrot
πŸ“…︎ Sep 09 2021
🚨︎ report
Well last night was pretty amazing, had 2 smoke sessions with Abacus from 8 🐎 and was extremely impressed! Today I am sampling the C-5. Took a few dry puffs and seems like it would be tasty. I'm about to find out in a few minutes! I'll be back with results! Where you guys from? Stl here...πŸ˜πŸ’¨
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/mcbstl345
πŸ“…︎ Feb 27 2020
🚨︎ report
Rewind! A little sampling of Spaced with the PO-33 youtu.be/46JXzH9iMxk
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/primoferal
πŸ“…︎ Apr 07 2020
🚨︎ report
Sampling a super cool song with the SP404SX v.redd.it/o9ztk45e8zl41
πŸ‘︎ 71
πŸ’¬︎
πŸ‘€︎ u/tagoi12438
πŸ“…︎ Mar 11 2020
🚨︎ report
Sampling Animal Crossing New Horizons with PO-33! (part 1) youtu.be/d5hWGc7BQe8
πŸ‘︎ 37
πŸ’¬︎
πŸ‘€︎ u/Freebeatonyoutube
πŸ“…︎ Mar 21 2020
🚨︎ report
Does this count?...."You don't know what I believe because I didn't say" -- Small sampling of me interacting with Trumpsters on FB yesterday
πŸ‘︎ 71
πŸ’¬︎
πŸ‘€︎ u/kaptainkory
πŸ“…︎ Feb 23 2020
🚨︎ report
China develops robot for throat swab sampling. Test results showed that the robotic throat swab sampling could achieve high quality results, with a one-time success rate of more than 95 per cent. straitstimes.com/asia/eas…
πŸ‘︎ 32
πŸ’¬︎
πŸ“…︎ Mar 10 2020
🚨︎ report
Sampling old Japanese record with PO-33 K.O. | SaEv youtube.com/watch?v=2E9yd…
πŸ‘︎ 86
πŸ’¬︎
πŸ‘€︎ u/quoban
πŸ“…︎ Nov 11 2019
🚨︎ report
Replacement of "noong" in a sample sentence with "kapag", "kung", and "nang" and necessity of "pa" with continuous aspect

Natutulog pa ako noong tumawag ang kapatid ko.

[I was still sleeping when my sibling called/telephoned me.]

Can the same meaning be expressed by replacing the noong with kapag, kung or nang thus:

Natutulog pa ako kapag tumawag ang kapatid ko.

Natutulog pa ako kung tumawag ang kapatid ko.

Natutulog pa ako nang tumawag ang kapatid ko.

With respect to the use of pa, is it used to enhance the vividness of the action of sleeping, or does it serve another function, such as a contradistinction to the normal situation when the writer's sibling calls?

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/numquamsolus
πŸ“…︎ Aug 28 2018
🚨︎ report
First snow of the year and jarring the last honey harvest of 2019 while sampling some of the mead I made with 2017 honey. imgur.com/ffaROus
πŸ‘︎ 184
πŸ’¬︎
πŸ‘€︎ u/verpine
πŸ“…︎ Nov 11 2019
🚨︎ report
Dry trimming some Sweet Island Skunk from Next Generation Seeds - awesome plant with some nice yield - buds are going to cure in mason jars for 3-4 weeks with some sampling in a few weeks....
πŸ‘︎ 25
πŸ’¬︎
πŸ‘€︎ u/AdanacDJM
πŸ“…︎ Jan 18 2020
🚨︎ report
That moment when Lush sends you samples you didn’t ask for with the replacement soap they didn’t pack in your original order, and they’re the ones you would have wanted if you picked!
πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/hmarie92
πŸ“…︎ Oct 17 2017
🚨︎ report
Shkodran Mustafi has reached the agreement with Arsenal to terminate his contract, confirmed! Schalke are set to sign him as Kabak replacement. Last steps as Liverpool are waiting to complete Kabak deal on loan with buy option. πŸ”΄πŸ #AFC #LFC #S04 #DeadlineDay twitter.com/FabrizioRoman…
πŸ‘︎ 847
πŸ’¬︎
πŸ‘€︎ u/MrIrishman699
πŸ“…︎ Feb 01 2021
🚨︎ report
Creating a track with Deluge, Digitone and OP-1: Deluge's sequencer is super convenient, Digitone's FM sounds are deep and OP-1 is just... You never know what comes out when you start sampling radio. youtube.com/watch?v=OgQVz…
πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/WHA-LES
πŸ“…︎ Feb 08 2020
🚨︎ report
Trippy shader sampling with Voronoi perturbations v.redd.it/ftlmktw40ej41
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/professormunchies
πŸ“…︎ Feb 27 2020
🚨︎ report
I saw someone else with ambient lights sampling colours from the monitor, so I set some up as well
πŸ‘︎ 131
πŸ’¬︎
πŸ‘€︎ u/tjs247
πŸ“…︎ May 13 2019
🚨︎ report
[Dreger] The expectation is Eichel will have disc replacement surgery very soon. Recovery time varies. Everyone hopeful he will be back on the ice in 4 months. twitter.com/DarrenDreger/…
πŸ‘︎ 1k
πŸ’¬︎
πŸ‘€︎ u/DecentLurker96
πŸ“…︎ Nov 04 2021
🚨︎ report
I make fractals in Blender using custom OSL scripts and I was quite happy with this render. There is still a lot of noise and volume sampling artefacts but it's already a lot for my little laptop
πŸ‘︎ 299
πŸ’¬︎
πŸ‘€︎ u/loic_vdb
πŸ“…︎ Jan 27 2019
🚨︎ report
The OP-Z's sampling update was already a huge improvement and the most recent one with the new LFO settings is yet again opening so many doors. Would highly appreciate if you have some creative tips about the new LFO options! youtube.com/watch?v=ayn5_…
πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/WHA-LES
πŸ“…︎ Nov 29 2019
🚨︎ report
Sampling night with the guys
πŸ‘︎ 26
πŸ’¬︎
πŸ‘€︎ u/Mr_Rice-n-Beans
πŸ“…︎ Jan 11 2020
🚨︎ report
Lofi Sampling and looping with the Akai mpk 249 in Logic Pro v.redd.it/ryzcgtopsst31
πŸ‘︎ 74
πŸ’¬︎
πŸ‘€︎ u/tagoi12438
πŸ“…︎ Oct 21 2019
🚨︎ report
Working with ESP32 Audio Sampling toptal.com/embedded/esp32…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/y_tan
πŸ“…︎ Mar 26 2020
🚨︎ report
Finally got audio sampling via DMA with no CPU working - Example VU Meter [+code] v.redd.it/5q8u6lz551v21
πŸ‘︎ 103
πŸ’¬︎
πŸ‘€︎ u/davepl
πŸ“…︎ Apr 28 2019
🚨︎ report
Super Sampling With New SteamVR Settings + Advanced Settings [GUIDE]

Hi again /r/Vive!

In light of a lot of recent confusion (and frustration from devs), I made a video that runs through how I super sample with the recent update in Steam VR's settings:
https://youtu.be/HhLfU8OhI4E

The video explains what super sampling is, how to super sample, and how to monitor performance ensuring you are super sampling correctly based on your specific PC's setup.

If you want to skip the 'understanding' part and just hit the settings part:
https://youtu.be/HhLfU8OhI4E?t=4m55s

If you wanna skip the vid altogether:

Essentially:

  1. Set global SS to 100%
  2. Set Steam VR Home per app SS value to Steam VR recommended setting
  3. Super sample apps individually
  4. Monitor performance and adjust accordingly

Super sampling in Steam VR (has benefit of per application super sampling):

  • On the desktop, go to Steam VR settings
  • On the Application tab, pick an app from the drop-down list and the slider will affect the super sampling of just that selected app
  • On the Video tab, tick the 'Manual Override' box and this slider will affect the super sampling of every game/app running through Steam VR

My advice:

  1. Take note of the recommended percentage on the 'Video' tab and then set that slider to 100% and leave it at that
  2. Go to the 'Application' tab, choose 'Steam VR Home' from the drop down list and then set that slider to whatever percentage was originally on the 'Video' tab
  3. Select any other game/app from the drop down list you want to super sample, then set the slider to the value you think your PC can handle for that app
  4. Back on the 'Video' tab, click on the 'Display Frame Timing' button
  5. Play your game/app, if you see red in the graph constantly then ease off on the super sampling for that app (back on the 'Application' tab) until you're within your PC's limit
  6. You can also tick the 'Show in Headset' box in the performance graph, and then in VR tilt your right hand over to see that graph in VR

Super sampling in Advanced Settings (has benefit of changing settings from in VR):

  1. Download and install app from here: https://github.com/matzman666/OpenVR-AdvancedSettings/releases
  2. Open the Steam VR dashboard in VR, click on 'Advanced Settings'
  3. On the Steam VR tab use the 'Application Supersampli…

[post truncated]

πŸ‘︎ 226
πŸ’¬︎
πŸ‘€︎ u/f4cepa1m
πŸ“…︎ Apr 26 2018
🚨︎ report
