A list of puns related to "Static Timing Analysis"
So, having gone through the long process of applying the Visual Studio static analysis tool to some large code bases, I figured I'd throw out some thoughts about the good, the bad, and the ugly that came out of it. Maybe it'll be helpful to someone... These are in no particular order, and of course I could be missing something about some of these that would make them less onerous than I've been assuming.
Any of them can be avoided by just turning off those checks, but that sort of defeats the purpose. The goal, hopefully, is to get the benefits of the analysis without having to write ridiculous code just to make the analyzer happy, which is often difficult. Every suppression you add means that some later change that is a legitimate error will just get ignored, because you've suppressed that warning in that bit of code.
Objects Passed to Callbacks
The analysis tool has to see every pointer being set or it warns about it. If you do callbacks, for predicates or for-each type operations, pass the elements as references even if the actual data is pointers. Otherwise, the analyzer will whine about every such loop and want you to do a null check even though the value is never going to be null.
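A rough sketch of the idea (my own illustration with made-up names, not the author's actual code): a for-each helper whose callback takes a reference, so the pointer dereference, and any null handling, lives in one place instead of in every loop body.

#include <cstdio>
#include <vector>

struct Widget { int id = 0; };

// Hypothetical helper: the container holds pointers, but the callback only
// ever sees a reference, so callers never trigger a null-check warning.
template <typename Container, typename Func>
void ForEach(Container& c, Func callback)
{
    for (auto* ptr : c)
        callback(*ptr);
}

int main()
{
    Widget a, b;
    std::vector<Widget*> widgets{ &a, &b };
    ForEach(widgets, [](Widget& w) { std::printf("%d\n", w.id); });
    return 0;
}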
Raw Arrays
I'm not remotely a purist, and my code runs from quite low level up to very high level, so I have plenty of legitimate places down low where I have to use raw arrays. But they clearly come at a cost for static analysis purposes: the analyzer in VS seems to have very limited ability to reason about array indices, and you typically have to index raw arrays through magic templates that suppress the warnings (and possibly range check them), making your code much less readable in the process.
Indexing Operators in General
The analyzer assumes all [] accesses are unchecked unless it actually sees otherwise, and of course (for some bizarre reason) even the ones in std::array and std::vector actually are unchecked. So every single one of them, and any raw array indexing, will be complained about. If you use the STL containers you can use the .at() method to avoid these warnings, but of course that's not nearly as readable. In my case, my index operators are range checked and the analyzer can see that code since it's templated, so I think I've mostly gotten around this.
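For illustration, here is a minimal sketch of the kind of range-checked index operator being described; the name and the error handling are my own assumptions, not the author's actual classes. Because the class is a template, the analyzer sees the full definition of operator[] and knows every access is bounds-checked.

#include <cstddef>
#include <stdexcept>

// A trivially checked array wrapper.
template <typename T, std::size_t N>
class CheckedArray
{
public:
    T& operator[](std::size_t i)
    {
        if (i >= N)
            throw std::out_of_range("CheckedArray index out of range");
        return m_data[i];
    }

private:
    T m_data[N] = {};
};

Usage is the same as a raw array, e.g. CheckedArray<int, 8> vals; vals[3] = 42;, but an out-of-range index fails loudly instead of silently.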
Some people seem to think indexed loops are evil of course, but that's silly. There are many places where you inherently need the index, and having to calculate it or separately maintain it is just crazy (and more moving parts).
I am running an FPGA Network Interface Card in production.
My design has a very tight timing constraint. Usually, I have to do a full build (build/synthesis/fitting) 8-10 times to get a bitstream which passes multi-corner timing analysis. The slow 900mV 85°C model always fails with negative slack in the Quartus Timing Analyzer. With multiple compilations, I have a 10-20% chance of getting a good bitstream.
Last month I had to hotfix a production bug, which required a slight change to the RTL. I fixed it quickly, but I had no time to build a good bitstream; one build takes 2.5 hours. I tried building 3-4 times a day but still failed to meet the timing constraints. In the end, I deployed a bitstream with a worst path of -0.09 slack in the slow 85°C model to production as a hotfix.
The hotfix ran in production for two days without any issues. Finally, I deployed a good bitstream to production on the third day.
So what's the risk (and, more importantly, how can I mitigate the risk) of using a design which fails multi-corner timing analysis?
Note: I won't try to censor anyone, but I am gonna request that this be kept a serious discussion about hockey. Try to remove your feelings about Benning and about other posters, because there's a lot to be talked about without getting into the petty us vs. them stuff. Also, before you start slinging mud at me for my bias, know that I tried to keep my bias from interfering with my findings; I started with a question, devised a way to find the answer, and found it. Feel free to disagree or even point out where I went wrong, but any accusation of me being a "Benning hater" or "pushing an agenda" will be taken with zero seriousness.
---
I wanted to see whether I was overreacting to the timing of that 1st round pick trade, so I did a little research about the successful teams of the last decade. I wanted to see how the most perennially successful franchises handled trading 1st round picks during the timeline of their developing cores.
In comparison to how the most successful franchises in this league built up their rosters, was Vancouver too hasty in trading away a 1st round pick for immediate roster help?
First, what defines a successful team? This website breaks down the "winningest" teams in the last 10 years through regular season wins. The top 3 teams are Pittsburgh, Washington, and St. Louis. I decided to analyze these teams because of their regular season success, which leads to post-season appearances. I also decided to analyze Chicago (#6) and LA (#11) because of their obvious Cup-winning ways.
Second, to examine the development of a team's core, I first had to identify each core's "cornerstone" pieces - those players who led the team to their success - and see when they were each drafted. These pieces were: Crosby, Malkin, Fleury; Ovechkin, Backstrom, Kuznetsov; Backes, Pietrangelo, Tarasenko, Binnington, Parayko; Toews, Kane, Seabrook, Keith; Kopitar, Doughty, Quick.
Third, I took to www.nhltradetracker.com to find 1st round trades that the teams made after the cornerstone pieces had all been drafted. Once I saw that a team had traded away a 1st rounder, I looked to see what their roster construction was like. Note that I include a lot of names on the roster construction outside of these cornerstone pieces.
Hey everyone, I've been tasked with finding out what kinds of tooling different programming communities use. I found a similar thread dated 2017 here, but obviously there have been changes over the last three years. Back then, people advised the OP to get any static analyzers they could get their hands on and integrate them into their CI, yet there weren't many specific name-drops.
So the question is: when it comes to the CI pipeline in your C++ projects, which checkers do you use? I guess clang-tidy is pretty much a given, but anything else specifically? I know I could just go and google a list of the best static analyzers for C++, but what I'm interested in is which tools people actually use for their projects.
Also, why are you using those specific tools and not others? Is there anything missing, some needs that weren't covered by your tools just yet? Are there any things you have to integrate over and over again across many projects in order to keep your C++ codebase neat and less error-prone?
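For what it's worth, wiring clang-tidy into a CMake-based build is close to a one-liner; this is a generic sketch, and the check set shown is just an example, not a recommendation:

# In CMakeLists.txt: run clang-tidy on every C++ source as it is compiled.
set(CMAKE_CXX_CLANG_TIDY "clang-tidy;-checks=-*,bugprone-*,modernize-*")

The same binary also runs standalone in CI, e.g. clang-tidy src/foo.cpp -- -std=c++17, where everything after the -- is passed to the compiler front end.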
This is part 1 of the test. It took 3-4 days just to analyse the frametimes in just 4 games. Part 2 is linked at the end.
Some of you probably know how to overclock RAM and how to adjust primary timings. But how about optimizing sub-timings, like secondary and tertiary timings, for gaming? Let's find out if it's worth it...
Test system
i7-8700K @ 5GHz core and 4.8GHz uncore
ASRock Z370 Taichi P4.00
2x8GB DDR4-3500 16-18-18-36-2T (dual rank, double-sided Hynix AFR)
EVGA GTX 1080 Ti @ 2126 core / 12474 mem
Corsair HX 750W
NZXT H440 White
Custom Water Cooling
Windows 10 64-bit 1607
Nvidia 430.64
Recorded with ShadowPlay
Wait. WTF is that at the end of each game? That is the main topic today: in-depth frametime analysis.
I felt this test deserved a ton of effort put into the frametime analysis.
Most of you probably know what AVG FPS, 1% Low, and 0.1% Low are.
The next graph is the frametime graph. It shows us smoothness.
The next one is the frametimes-by-percentile graph. It shows frametimes from the average (50th percentile) up to the most important 99th percentile (1% Low) and 99.9th percentile (0.1% Low).
Note that from the 50th to the 95th percentile each gridline step is 5, while from 95 to 99.9 each step is just 1, because that area is the most important for smoothness.
Next is "Time spent beyond ...ms", which tells us how much time is spent on frames that take longer than a certain number of milliseconds to render.
You are probably familiar with these numbers:
50ms means 20 FPS (1000/20 = 50)
33.33ms means 30 FPS (1000/30 = 33.33)
16.67ms means 60 FPS (1000/60 = 16.67)
10ms means 100 FPS (1000/100 = 10)
8.33ms means 120 FPS (1000/120 = 8.33)
6.94ms means 144 FPS (1000/144 = 6.94)
Why is this graph important? It tells us about smoothness in another dimension.
If you want a solid 60 FPS, "zero" is the number you want to see in the 50ms, 33.33ms, and 16.67ms graphs. It means that no frame took longer than 16.67ms to render.
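To make these metrics concrete, here is a rough self-contained sketch of how they can be computed from a list of frametimes. This is my own illustration, not the author's tooling, and it uses one common definition of "time spent beyond" among several:

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // Frametimes in milliseconds (made-up values, not the actual test data).
    std::vector<double> ft{ 16.2, 16.8, 15.9, 17.1, 33.5, 16.4, 16.6, 16.3, 16.5, 16.7 };

    double total = 0.0;
    for (double t : ft) total += t;
    std::printf("AVG FPS: %.1f\n", 1000.0 * ft.size() / total);

    // Percentile lows: sort frametimes ascending and read the slow tail.
    std::vector<double> sorted = ft;
    std::sort(sorted.begin(), sorted.end());
    auto pct = [&](double p) {
        std::size_t idx = static_cast<std::size_t>(p / 100.0 * (sorted.size() - 1));
        return sorted[idx];
    };
    std::printf("1%% Low:   %.1f FPS\n", 1000.0 / pct(99.0));
    std::printf("0.1%% Low: %.1f FPS\n", 1000.0 / pct(99.9));

    // "Time spent beyond X ms": sum the portion of each frame over the threshold.
    const double threshold = 16.67;
    double beyond = 0.0;
    for (double t : ft)
        if (t > threshold) beyond += t - threshold;
    std::printf("Time beyond %.2fms: %.2fms\n", threshold, beyond);
    return 0;
}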
I really hope you enjoyed my test.
If you want to watch a side-by-side comparison of this test, please visit:
part 1: https://www.youtube.com/watch?v=TzkcT1mjLpw
part 2: https://www.youtube.com/watch?v=g9pV6XI0ADI
Part 2 of the analysis: https://imgur.com/a/zryEdGA
https://preview.redd.it/1encf8i0tx231.png?width=1208&format=png&auto=webp&s=2e8aa086bc31fda3f67aeec6ec610b49491630b2
https://preview.redd.it/02w2wa12tx231.png?width=1213&format=png&auto=webp&s=1af669effa9c055eb44030a98a48d62d346f1664
This book was recently published, and is presented as "a self-contained introduction to abstract interpretation-based static analysis, an essential resource for students, developers, and users."
Has anyone here had the chance to read it? I have not found any evaluations of it so far, and I'm eager for state-of-the-art, practical books on this subject.
It is a very typical bug:
#include <map>
using namespace std;

int main() {
    multimap<int, int> test;
    test.emplace(1, 3);
    test.emplace(3, 3);
    test.emplace(3, 4);

    int some_value = 3;  // value to erase (any value works for the demo)
    auto range = test.equal_range(3);
    for (auto i = range.first; i != range.second; ++i) {
        if (i->second == some_value) test.erase(i);  // BUG: erase(i) invalidates i, so the following ++i is UB
    }
}
First of all, the erase(i) call invalidates i, and the subsequent ++i is UB. But I tried a couple of tools, like Cppcheck, the Clang tools, and Valgrind, and none of them report the bug. Any suggestions? (Or maybe I did not use them correctly?)
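For reference, the usual fix (shown here as a generic sketch, separate from the question of which tool catches the bug) is to advance using the iterator that erase() returns; since C++11, erase() on map/multimap returns the iterator following the removed element:

for (auto i = range.first; i != range.second; ) {
    if (i->second == some_value)
        i = test.erase(i);  // erase() returns the next valid iterator
    else
        ++i;
}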
EDIT: Likely solved.
My case doesn't look like any of the regular cases I've been able to find online. There's a definite case seam that I could base the timing off of, but does anyone with experience know if I should instead be going off that line above the notch in this pic? The case seam doesn't line up with that line at all. You can see my seam a bit to the left.
In the following code:
def taste(color):
    if color == "orange":
        flavor = "tangy"
    elif color == "pink":
        flavor = "sweet"
    print("The flavor is", flavor)
This code will fail if the color entered is, say, 'blue', with the following error:
>>> taste('blue')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in taste
UnboundLocalError: local variable 'flavor' referenced before assignment
What I am trying to determine is whether there are any code analysis tools that will recognize the potential for UnboundLocalError before runtime. It appears that neither pylint nor pyflakes will catch this error. There must be some code analysis tool that does this, ideally one that is not focused on types (i.e., one that does not require type hints/annotations in the source code).
Hello fellow members, I am using RStudio 1.2.5033 and R version 3.3.2 (2016-10-31). I am looking for a tool or a package that does static analysis of ".R" files. I was wondering whether there is a package like pylint, pyflakes, etc. for Python.
I did my research on this and found a package called 'CodeDepends', but it doesn't support version 3.3.2. I also found another one called 'codetools'. Right now I am looking at the 'codetools' package and seeing how it works with a ".R" file.
I am a recent mechanical engineering graduate and I am highly interested in CAD design and FEA, so I would like to get some insights on Finite Element Analysis from experienced industry people.
I understand that there are many factors that influence the results of an FEA simulation (in this case I am talking about linear static stress simulation). Things like mesh size, singularities, and others can have a remarkable impact on results, but assuming no major errors have been made, how close would a real component with the same geometry come to matching the simulated results? Are there industry standards for judging simulation accuracy, or is a prototype needed to completely verify the validity of a simulation?
The main concern for me is how much the "imperfections" of materials and the manufacturing process affect the performance of mechanical parts. Also, you can cast metal parts or forge them, which results in relatively different component performance, so can you be certain to a degree when looking at simulation results?
(To run simulations I use the HyperWorks software.)
So I posted a little about the basic jumpshot analysis I've done for myself, and I wanted to refine and expand it a little. Unfortunately I'm busy and this is tedious, so I didn't get as far as I wanted. Also, I do not have Ray Allen's jumper because I don't have a backcourt shooter (doh), so I went with my second choice, Gary Payton, to compare to my own jumper.
Methods
Results
Well, my boy Luke Kennard's jumpshot that I use eats a big fat bag of dog dicks.
Gary Payton's jumper greens from 465 to 505 ms, a 40ms window.
Luke Kennard's jumper greens from 495 to 525 ms, a 30ms window.
So not only is Luke's shot nearly 10% slower, it also gives up 10ms of green window. So I'm switching, I guess. I tested Luke's down to 1ms on the late end, and it broke at 528ms or 526ms; I can't remember (I didn't write anything down at that point).
I am sorry, but I am not deleting one of my guys to make a backcourt shooter just to get Ray's. You can compare it to Gary Payton's and see; it looks much faster from what I've seen online.
Hey everyone, I just had a quick question. For cards like Dawnwalker that require a 5+ power creature to be played to get their effect: when is the condition checked?
For example, if I play Ravenous Thornbeast and sacrifice a creature, will that trigger Dawnwalker?
Another case: if I play a 4-power creature while I have a Xenan Obelisk, does that trigger Dawnwalker?