Sign Off STA (Static Timing Analysis) Training Courses VLSI vlsichip.in/sta-static-ti…
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/vlsichipbang1
πŸ“…︎ Nov 02 2019
🚨︎ report
Reducing debugging time while programming. Use static and stack-trace analysis to determine which function call caused the error. github.com/snwfdhmp/errlo…
πŸ‘︎ 61
πŸ’¬︎
πŸ‘€︎ u/valentin_michou
πŸ“…︎ Dec 03 2019
🚨︎ report
Static analysis in GCC 10 developers.redhat.com/blo…
πŸ‘︎ 173
πŸ’¬︎
πŸ‘€︎ u/unaligned_access
πŸ“…︎ Mar 27 2020
🚨︎ report
Write-up about code quality with Ansible. In short: smoke-tests; static analysis; defensive techniques; DRY and KISS; and much more. sysdogs.com/en/on-code-qu…
πŸ‘︎ 70
πŸ’¬︎
πŸ‘€︎ u/limakzi
πŸ“…︎ Apr 24 2020
🚨︎ report
Thoughts on applying static analysis to a large code base

So, having worked through the long process of applying the Visual Studio static analysis tool to some large code bases, I figured I'd throw out some thoughts about the good, bad, and ugly that came out of it. Maybe it'll be helpful to someone... These are in no particular order, and of course I could be missing something about some of these that would make them less onerous than I've been assuming.

Any of them can be avoided by just turning off those checks, but that sort of defeats the purpose. The goal, hopefully, is to get the benefits of the analysis without having to write ridiculous code just to make the analyzer happy, which is often difficult. Every suppression you add means that some later change that is a legitimate error will just get ignored, because you've suppressed that warning in that bit of code.

Objects passed to callbacks

The analysis tool has to see every pointer being set or it warns about it. If you do callbacks, for predicates or for-each type things, pass the elements as references even if the actual data is pointers. Otherwise, the analyzer will whine about every such loop and want you to do a null test even though the pointer is never going to be null.
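A minimal sketch of the difference, with made-up names (forEach, Widget, and sumValues are illustrative, not from the original post): when the callback takes a reference, the single dereference lives inside the helper, so the analyzer no longer demands a null test at every use site.

```cpp
#include <vector>

struct Widget { int value = 0; };

// Hypothetical for-each helper: the stored data is pointers, but the
// callback receives a reference, so from the callback's point of view
// the object can never be null.
template <typename T, typename F>
void forEach(std::vector<T*>& items, F callback) {
    for (T* item : items) {
        callback(*item);  // one dereference, in one place
    }
}

int sumValues(std::vector<Widget*>& widgets) {
    int total = 0;
    // Had the lambda taken Widget*, the analyzer would want a null check
    // before every member access inside it.
    forEach(widgets, [&total](Widget& w) { total += w.value; });
    return total;
}
```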

Raw Arrays

I'm not remotely a purist, and my code goes from quite low level up to very high, so I have plenty of legitimate places down low where I have to use raw arrays. But they obviously come at a cost for static analysis purposes: the analyzer in VS seems to have very limited ability to reason about array indices, and you typically have to index them using magic templates that suppress the warnings (and possibly range check them), making your code much less readable in the process.

Indexing Operators in General

The analyzer assumes all [] accesses are unchecked unless it actually sees otherwise, and of course (for some bizarre reason) even the ones in std::array and std::vector actually are unchecked. So every single one of them, and any raw array indexing, will be complained about. If you use the STL containers you can use the .at() method to avoid these, but of course that's not nearly as readable. For me, my index operators are range checked and the analyzer can see that code since it's templatized, so I think I've mostly gotten around this.
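One way to get that effect, sketched with a hypothetical class (the post doesn't show the author's actual code): make the subscript operator a templatized, inline range check, so the analyzer sees the bounds test at every call site instead of assuming an unchecked access.

```cpp
#include <cstddef>
#include <stdexcept>

// Sketch of a range-checked fixed-size array. Because operator[] is
// defined inline in a template, the analyzer can see the bounds check
// rather than treating the access as unchecked.
template <typename T, std::size_t N>
class CheckedArray {
public:
    T& operator[](std::size_t i) {
        if (i >= N) throw std::out_of_range("CheckedArray index");
        return data_[i];
    }
    constexpr std::size_t size() const { return N; }
private:
    T data_[N] = {};
};
```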

Some people seem to think indexed loops are evil, of course, but that's silly. There are many places where you inherently need the index, and having to calculate it or separately maintain it is just crazy (and more moving parts

... keep reading on reddit ➑

πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/Dean_Roddey
πŸ“…︎ Nov 28 2019
🚨︎ report
IntelliJ IDEA's Static Analysis vs. the Human Brain blog.jetbrains.com/idea/2…
πŸ‘︎ 162
πŸ’¬︎
πŸ‘€︎ u/dpash
πŸ“…︎ Nov 20 2019
🚨︎ report
What's the risk of deploying a bitstream, which fails multi-corner timing analysis, to the production environment?

I am running an FPGA Network Interface Card in production.
My design has a very tight timing constraint. Usually I have to run a full build (synthesis/fitting) 8-10 times to get a bitstream that passes multi-corner timing analysis. The slow 900 mV 85 °C model always fails with negative slack in the Quartus Timing Analyzer. Across multiple compilations, I have a 10-20% chance of getting a good bitstream.
Last month I had to hotfix a production bug that required a slight change to the RTL. I fixed it quickly, but I had no time to build a good bitstream: a single build takes 2.5 hours, and after building 3-4 times a day I still failed to meet the timing constraints. In the end, I deployed the bitstream, with a worst-path slack of -0.09 in the slow 85 °C model, to production as a hotfix.

The hotfix ran in production for two days without any issues. Finally, on the third day, I deployed a good bitstream to production.

So what's the risk (and, more importantly, how do I mitigate the risk) of using a design which fails multi-corner timing analysis?

πŸ‘︎ 27
πŸ’¬︎
πŸ‘€︎ u/kentsang77
πŸ“…︎ Feb 12 2020
🚨︎ report
is there a way to get this static version of the default dynamic wallpaper? (4am timing)
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/asdfjhkasdhf
πŸ“…︎ May 11 2019
🚨︎ report
[Serious] Comparing The Timing of the JT Miller Trade to the 5 Most Successful Teams: An analysis

Note: I won't try to censor anyone, but I am gonna request that this be kept a serious discussion about hockey. Try to remove your feelings about Benning and about other posters, because there's a lot to be talked about without getting into the petty us vs. them stuff. Also, before you start slinging mud at me for my bias, know that I tried to keep my bias from interfering with my findings; I started with a question, devised a way to find the answer, and found it. Feel free to disagree or even point out where I went wrong, but any accusation of me being a "Benning hater" or "pushing an agenda" will be taken with zero seriousness.

---

Comparing The Timing of the JT Miller Trade to the 5 Most Successful Teams

I wanted to see whether I was overreacting to the timing of that 1st round pick trade, so I did a little research on the successful teams of the last decade. I wanted to see how the most perennially successful franchises handled trading 1st round picks during the timeline of their developing cores.

Research Question

In comparison to how the most successful franchises in this league built up their rosters, was Vancouver too hasty in trading away a 1st round pick for immediate roster help?

Methodology

First, what defines a successful team? This website breaks down the "winningest" teams in the last 10 years through regular season wins. The top 3 teams are Pittsburgh, Washington, and St. Louis. I decided to analyze these teams because of their regular season success, which leads to post-season appearances. I also decided to analyze Chicago (#6) and LA (#11) because of their obvious Cup-winning ways.

Second, to examine the development of a team's core, I first had to identify each core's "cornerstone" pieces - those players who led the team to their success - and see when they were each drafted. These pieces were: Crosby, Malkin, Fleury; Ovechkin, Backstrom, Kuznetzov; Backes, Pietrangelo, Tarasenko, Binnington, Parayko; Toews, Kane, Seabrook, Keith; Kopitar, Doughty, Quick.

Third, I took to www.nhltradetracker.com to find 1st round trades that the teams made after the cornerstone pieces had all been drafted. Once I saw that a team had traded away a 1st rounder, I looked to see what their roster construction was like. Note that I include a lot of names on the roster construction outside of these cornerstone pieces. This i

... keep reading on reddit ➑

πŸ‘︎ 60
πŸ’¬︎
πŸ‘€︎ u/OhNoYouDiUnn
πŸ“…︎ Jun 23 2019
🚨︎ report
I’m trying to do a static structural analysis on welsim and I’m watching a bunch of videos and following exactly what they do, but when I hit β€œcompute” it returns this error
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/Jackmiller58
πŸ“…︎ Apr 14 2020
🚨︎ report
Static analysis tools you use in CI for your cpp projects?

Hey everyone, I've been tasked with finding out what kinds of tooling different programming communities use. I found a similar thread dated 2017 here, but obviously there have been changes over the last three years. In that thread, people advised the OP to get any static analyzers they could get their hands on and integrate them into their CI, yet there weren't many specific name-drops.

So the question is when it comes to CI pipeline in your C++ projects, which checkers do you use? I guess clang-tidy is pretty much a given, but anything else specifically? I know I could just go and google a list of best static analyzers for C++, but what I'm interested in is what tools people actually use for their projects.

Also, why are you using specifically those tools and not the others? Is there anything missing, some needs that weren't covered by your tools just yet? Are there any things you have to integrate over and over again for many projects in order to keep your C++ codebase neat and less error-prone?

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/Alena_dev
πŸ“…︎ Mar 09 2020
🚨︎ report
Typehint and return type checker. typhp is a simple tool that checks whether type hints for arguments and return types are declared. Unlike static analysis tools, it doesn't point out possible errors and issues but instead suggests typehinting everything possible. https://github.com/seferov/typhp
πŸ‘︎ 39
πŸ’¬︎
πŸ‘€︎ u/MrSafarov
πŸ“…︎ Oct 17 2019
🚨︎ report
"RAM overclocking" should you tighten sub-timings for gaming ? In-depth frametimes analysis

This is part 1 of the test. It took 3-4 days just to analyse frametimes in just 4 games. Part 2 is coming soon.

Some of you probably know how to overclock RAM and how to adjust primary timings. But how about optimizing sub-timings, like secondary and tertiary timings, for gaming? Let's find out whether it's worth it...

Test system

i7-8700K @ 5 GHz core and 4.8 GHz uncore

ASRock Z370 Taichi P4.00

2x8GB DDR4-3500 16-18-18-36-2T (dual-rank, double-sided Hynix AFR)

EVGA GTX 1080 Ti @ 2126 core / 12474 mem

Corsair HX 750W

NZXT H440 White

Custom Water Cooling

Windows 10 64 bit 1607

Nvidia 430.64

Record by ShadowPlay

Wait. WTF is at the end of each game? That is the main topic of today: in-depth frametimes analysis.

I feel this test deserved the ton of effort that went into the frametimes analysis.

Most of you probably know what AVG FPS, 1% Low, and 0.1% Low are.

The next graph is the frametimes graph. It shows us smoothness over time.

The next one is the frametimes-by-percentile graph. It shows frametimes from the average (50th percentile) up to the most important 99th percentile (1% Low) and 99.9th percentile (0.1% Low).

Note that from 50 to 95 each division on the scale is 5, while from 95 to 99.9 each division is just 1, because that area is the most important metric for smoothness.

Next is the "Time spent beyond ...ms" graph, which tells us how much time was spent on frames that took longer than a certain number of milliseconds to render.

You guys are probably familiar with those numbers.

50 ms means 20 FPS (1000/20 = 50)

33.33 ms means 30 FPS (1000/30 = 33.33)

16.67 ms means 60 FPS (1000/60 = 16.67)

10 ms means 100 FPS (1000/100 = 10)

8.33 ms means 120 FPS (1000/120 = 8.33)

6.94 ms means 144 FPS (1000/144 = 6.94)

Why is this graph important? It can show us smoothness in another dimension.

If you want solid 60 FPS, "zero" is the best number to see in the 50 ms, 33.33 ms, and 16.67 ms graphs. It means that no frame took more than 16.67 ms to render.

I really hope you enjoyed my test.

If you want to watch side by side comparison of this test please visit

part 1 https://www.youtube.com/watch?v=TzkcT1mjLpw

part 2 https://www.youtube.com/watch?v=g9pV6XI0ADI

https://preview.redd.it/1encf8i0tx231.png?width=1208&format=png&auto=webp&s=2e8aa086bc31fda3f67aeec6ec610b49491630b2

https://preview.redd.it/02w2wa12tx231.png?width=1213&format=png&auto=webp&s=1af669effa9c055eb44030a98a48d62d346f1664

https://preview.redd

... keep reading on reddit ➑

πŸ‘︎ 99
πŸ’¬︎
πŸ‘€︎ u/Enterprise24
πŸ“…︎ Jun 07 2019
🚨︎ report
"Introduction to Static Analysis: An Abstract Interpretation Perspective"

This book was recently published, and is presented as "A self-contained introduction to abstract interpretation-based static analysis, an essential resource for students, developers, and users."

Did anyone here have the chance to read it? I did not find any evaluation of it so far, and I'm eager for state of the art, practical books on this matter.

πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/oparisy
πŸ“…︎ Mar 11 2020
🚨︎ report
i have just completed analysis on the LP4 teaser, and after calculating the derivative of the number of transients multiplied by the spectral path of the static undertones (reversed of course), I can confirm that LP4 will have a dunga.
πŸ‘︎ 136
πŸ’¬︎
πŸ‘€︎ u/Boldhams
πŸ“…︎ Oct 22 2019
🚨︎ report
Tenkawa - Language server for PHP, with powerful static analysis and type inference. github.com/tsufeki/tenkaw…
πŸ‘︎ 47
πŸ’¬︎
πŸ‘€︎ u/gridderer
πŸ“…︎ Aug 26 2019
🚨︎ report
Which static analysis tool can detect a bug like this?

It is a very typical bug:

    multimap<int, int> test;
    test.emplace(1, 3);
    test.emplace(3, 3);
    test.emplace(3, 4);
    auto range = test.equal_range(3);
    for (auto i = range.first; i != range.second; ++i) {
        if (i->second == some_value) test.erase(i);
    }

First of all, the erase(i) call invalidates i, and then ++i is UB. But I tried a couple of tools, like Cppcheck, the Clang tooling, and Valgrind, and none of them reports the bug. Any suggestions? (Or maybe I did not use them correctly?)
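For reference, the idiomatic correction (separate from the question of which tool can detect the bug): since C++11, erase on associative containers returns the iterator following the erased element, so you advance via its return value instead of ++i. The eraseMatching wrapper and the some_value parameter below are my framing, not the original snippet.

```cpp
#include <map>

// Erase all values equal to `some_value` under `key` without ever
// incrementing an invalidated iterator: multimap::erase returns the
// iterator after the erased element, and erasing one element does not
// invalidate range.second.
void eraseMatching(std::multimap<int, int>& m, int key, int some_value) {
    auto range = m.equal_range(key);
    for (auto i = range.first; i != range.second; ) {
        if (i->second == some_value)
            i = m.erase(i);  // advance via the return value
        else
            ++i;
    }
}
```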

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/cpei2019
πŸ“…︎ Nov 27 2019
🚨︎ report
Open source PlatformIO Core 4.1 is out with Firmware and Memory Inspection, Static Code Analysis community.platformio.org/…
πŸ‘︎ 65
πŸ’¬︎
πŸ‘€︎ u/ikravets
πŸ“…︎ Nov 07 2019
🚨︎ report
On the Relationship Between Static Analysis and Type Theory semantic-domain.blogspot.…
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/mttd
πŸ“…︎ Aug 25 2019
🚨︎ report
Open source PlatformIO Core 4.1 is out with Firmware and Memory Inspection, Static Code Analysis community.platformio.org/…
πŸ‘︎ 67
πŸ’¬︎
πŸ‘€︎ u/ikravets
πŸ“…︎ Nov 07 2019
🚨︎ report
List of Haskell static code analysis software github.com/razvan-flavius…
πŸ‘︎ 35
πŸ’¬︎
πŸ‘€︎ u/razvanpanda
πŸ“…︎ Sep 16 2019
🚨︎ report
Static Analysis with Psalm PHP twilio.com/blog/static-an…
πŸ‘︎ 40
πŸ’¬︎
πŸ‘€︎ u/Spabby
πŸ“…︎ Oct 18 2019
🚨︎ report
I Open-Sourced Florentino; A cross-platform file analysis framework. useful for extracting static resources from malwares and unknown file analysis. github.com/0xsha/florenti…
πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/0xsha
πŸ“…︎ Mar 05 2020
🚨︎ report
C# Static Analysis Tool Roslynator.Analyzers Now Has over 500 Ways to Improve Code infoq.com/news/2020/01/ro…
πŸ‘︎ 150
πŸ’¬︎
πŸ‘€︎ u/grauenwolf
πŸ“…︎ Feb 02 2020
🚨︎ report
Static timing a '76 FI. Everything tells me to go with the case seam, but what's this line for?

EDIT : Likely solved

Here's a pic

My case doesn't look like a regular case I've been able to find online. There's a definite case seam that I could base timing off, but does anyone with experience know if I should be going off that line above the notch in this pic? The case seam doesn't line up with that line at all. You can see my seam to the left a bit.

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/JohnBenchCalled
πŸ“…︎ Jun 20 2017
🚨︎ report
Static analysis of an unknown compression format (2012) blog.lse.epita.fr/article…
πŸ‘︎ 54
πŸ’¬︎
πŸ‘€︎ u/corysama
πŸ“…︎ Nov 01 2019
🚨︎ report
Write-up about code quality with Ansible. In short: smoke-tests; static analysis; defensive techniques; DRY and KISS; and much more. sysdogs.com/en/on-code-qu…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/limakzi
πŸ“…︎ Apr 24 2020
🚨︎ report
School closures may potentially reduce influenza transmission, optimum strategy (length and timing) is unclear: Meta-analysis of epidemiological studies bmjopen.bmj.com/content/b…
πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/neuuroklan
πŸ“…︎ Mar 17 2020
🚨︎ report
Preventing "Variable used before definition" errors using static analysis?

In the following code:

def taste(color):
    if color == "orange":
        flavor = "tangy"
    elif color == "pink":
        flavor = "sweet"
    print("The flavor is", flavor)

This code will fail if the color entered is, say, 'blue', with the following error:

>>> taste('blue')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in taste
UnboundLocalError: local variable 'flavor' referenced before assignment

What I am trying to determine is whether there are any code analysis tools that will recognize the potential for UnboundLocalError before runtime. It appears that neither pylint nor pyflakes will catch this error. There must be some code analysis tool that does this. Ideally, it would be one that is not focused on types (i.e. does not require type hints/annotations in the source code).

πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/jordanreiter
πŸ“…︎ Aug 13 2019
🚨︎ report
Static Code analysis for R

Hello fellow members, I am using RStudio 1.2.5033 and R version 3.3.2 (2016-10-31). I am looking for a tool or a package that does static analysis of ".R" files. I was wondering whether there is a package like pylint, pyflakes, etc. for Python.

I did my research on this and found a package called 'CodeDepends', but that doesn't support version 3.3.2, and another one called 'codetools'. Right now I am looking at the 'codetools' package and seeing how it works with a ".r" file.

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/newpythoncoder
πŸ“…︎ Mar 03 2020
🚨︎ report
Open source PlatformIO Core 4.1 is out with Firmware and Memory Inspection, Static Code Analysis community.platformio.org/…
πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/ikravets
πŸ“…︎ Nov 07 2019
🚨︎ report
How close are FEA simulation results to real-life results? (Linear static stress analysis)

I am a recent mechanical engineering graduate and I am highly interested in CAD design and FEA, so I would like to get some insights on Finite Element Analysis from experienced industry people.

I understand that there are many factors that influence the results of an FEA simulation (in this case I am talking about linear static stress simulation). Things like mesh size, singularities, and others can have a remarkable impact on results, but assuming no major errors have been made, how closely would the same component with the same geometry behave in real life? Are there industry standards for judging simulation accuracy, or is a prototype needed to completely verify the validity of a simulation?

My main concern is how much impact the "imperfections" of materials and the manufacturing process have on the performance of mechanical parts. Also, you can cast metal parts or forge them, which results in relatively different component performance, so to what degree can you be certain when looking at simulation results?

(To run simulations I use the HyperWorks software.)

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/zeritom
πŸ“…︎ Sep 22 2019
🚨︎ report
Jumpshot timing analysis - Gary Payton & Luke Kennard

So I posted a little about the basic jumpshot analysis I've done for myself and I wanted to refine that and expand a little. Unfortunately I'm busy and this is tedious so I didn't get as far in as I wanted. Also, I do not have Ray Allen's jumper because I don't have a backcourt shooter (doh). But I went with the second choice of Gary Payton to compare to my own jumper.

Methods

  1. I used a CronusMax with 5ms increments - I'll probably write something better at some point but it was the most convenient for me not to reinvent the wheel and set up a timing adjustment button yet. If I decide to do more I'll improve it :)
  2. As noted I only went to 5ms intervals. That seems to be where all the break points are oddly enough anyway.
  3. I only count pipped jumpers obviously, no late greens.
  4. I stood still and shot from a standstill after resetting with a pump fake, and also compared to shooting out of a neutral dribble. These are the same, except for a timing variance: shooting at the top of your dribble is slightly faster, by roughly 5 ms.

Results

Well, my boy Luke Kennard's jumpshot that I use eats a big fat bag of dog dicks.

Gary Payton's Jumper greens from 465 to 505 ms with a 40ms window.

Luke Kennard's jumper greens from 495 to 525 ms with a 30 ms window.

So not only is Luke's shot nearly 10% slower, it also gives up 10 ms of green window. So I'm switching, I guess. I tested Luke's down to 1 ms on the late end and it broke at 528 ms or 526 ms; I can't remember (I didn't write anything down then).

I am sorry I am not deleting one of my guys to make a backcourt shooter to get Ray's, but you can compare it to Gary Payton's and see. It looks much faster from what I've seen online.

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/pokk3n
πŸ“…︎ May 21 2019
🚨︎ report
"Introduction to Static Analysis: An Abstract Interpretation Perspective" /r/Compilers/comments/fgw…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/oparisy
πŸ“…︎ Mar 11 2020
🚨︎ report
Timing and static effects in this game?

Hey everyone, I just had a quick question. Cards like Dawnwalker that require a 5+ power creature to be played to trigger their effect: when do they check their condition?

For example, if I play Ravenous Thornbeast and sacrifice a creature, will this trigger Dawnwalker?

Another case would be: if I play a 4 power creature with Xenan Obelisk, does that trigger Dawnwalker?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/worstchemistNA
πŸ“…︎ Feb 08 2018
🚨︎ report
A Proposal for IDisposable and Static Analysis: DisposeUnused Attribute infoq.com/news/2019/10/ID…
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/grauenwolf
πŸ“…︎ Oct 11 2019
🚨︎ report
What PHP can be: thoughts on strong types, generics and static analysis stitcher.io/blog/what-php…
πŸ‘︎ 74
πŸ’¬︎
πŸ‘€︎ u/brendt_gd
πŸ“…︎ Apr 16 2018
🚨︎ report
Type and timing of menopausal hormone therapy and breast cancer risk: individual participant meta-analysis of the worldwide epidemiological evidence thelancet.com/journals/la…
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/mrsuperguy
πŸ“…︎ Aug 29 2019
🚨︎ report
Exakat is an incredible static analysis tool, I don't see it mentioned enough. github.com/exakat/exakat
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/MaxGhost
πŸ“…︎ Sep 08 2019
🚨︎ report
Should you tighten sub-timings for gaming? In-depth frametimes analysis

This is part 1 of the test. It took 3-4 days just to analyse frametimes in just 4 games.

Some of you probably know how to overclock RAM and how to adjust primary timings. But how about optimizing sub-timings, like secondary and tertiary timings, for gaming? Let's find out whether it's worth it...

Test system

i7-8700K @ 5 GHz core and 4.8 GHz uncore

ASRock Z370 Taichi P4.00

2x8GB DDR4-3500 16-18-18-36-2T (dual-rank, double-sided Hynix AFR)

EVGA GTX 1080 Ti @ 2126 core / 12474 mem

Corsair HX 750W

NZXT H440 White

Custom Water Cooling

Windows 10 64 bit 1607

Nvidia 430.64

Record by ShadowPlay

https://imgur.com/a/JwvWlyw

if you want to watch side by side comparison please visit

https://www.youtube.com/watch?v=TzkcT1mjLpw

Wait. WTF is at the end of each game?

I feel this test deserved the ton of effort that went into the frametimes analysis.

Most of you probably know what AVG FPS, 1% Low, and 0.1% Low are.

The next graph is the frametimes graph. It shows us smoothness over time.

The next one is the frametimes-by-percentile graph. It shows frametimes from the average (50th percentile) up to the most important 99th percentile (1% Low) and 99.9th percentile (0.1% Low).

Note that from 50 to 95 each division on the scale is 5, while from 95 to 99.9 each division is just 1, because that area is the most important metric for smoothness.

Next is the "Time spent beyond ...ms" graph, which tells us how much time was spent on frames that took longer than a certain number of milliseconds to render.

You guys are probably familiar with those numbers.

50 ms means 20 FPS (1000/20 = 50)

33.33 ms means 30 FPS (1000/30 = 33.33)

16.67 ms means 60 FPS (1000/60 = 16.67)

10 ms means 100 FPS (1000/100 = 10)

8.33 ms means 120 FPS (1000/120 = 8.33)

6.94 ms means 144 FPS (1000/144 = 6.94)

Why is this graph important? It can show us smoothness in another dimension.

If you want solid 60 FPS, "zero" is the best number to see in the 50 ms, 33.33 ms, and 16.67 ms graphs. It means that no frame took more than 16.67 ms to render.

I really hope you enjoyed my test.

Part 2 https://imgur.com/a/zryEdGA

Side by side comparison https://www.youtube.com/watch?v=g9pV6XI0ADI

πŸ‘︎ 160
πŸ’¬︎
πŸ‘€︎ u/Enterprise24
πŸ“…︎ Jun 07 2019
🚨︎ report
