A list of puns related to "Saliency"
"Phenylpiracetam is known to increase operant behavior. In tests against a control, Sprague-Dawley rats given free access to less-preferred rat chow and trained to operate a lever repeatedly to obtain preferred rat chow performed additional work when given methylphenidate, d-amphetamine, or phenylpiracetam. Rats given 1 mg/kg amphetamine performed an average of 150% as much work and consumed 50% as much non-preferred rat chow as control rats; rats given 10 mg/kg methylphenidate performed 170% as much work and consumed similarly; and rats given 100 mg/kg phenylpiracetam performed an average of 375% as much work and consumed little non-preferred rat chow.
Present data show that (R)-phenylpiracetam increases motivation, i.e., the workload that animals are willing to perform to obtain more rewarding food. At the same time, consumption of freely available normal food does not increase. Generally this indicates that (R)-phenylpiracetam increases motivation [...] The effect of (R)-phenylpiracetam is much stronger than that of methylphenidate and amphetamine.[11]" Source: https://en.m.wikipedia.org/wiki/Phenylpiracetam under the "Operant behavior" section.
For a repo on GitHub I have to run TF 1.14, but now I want to visualize the saliency maps. I'm new to TF, and online I see people achieving this with 2.0. Can someone help me?
While I was exploring IQ tests and how pictures with patterns to complete and 3D objects to manipulate can determine intelligence, one idea came to my mind: saliency. Let's say an IQ test measures your ability to solve problems; that is the simplest definition of intelligence. But for you to solve a problem, the problem needs to matter to your preferences or have contextual relevance.
Saliency and real intelligence
Saliency is a concept in neuroscience. It simply means that a particular stimulus grabs your attention because it stands out among thousands of sensory stimuli from the environment. The brain focuses your limited attention and cognitive resources on that particular stimulus because it is salient: a rewarding stimulus or a dangerous one. Salience is how the brain picks out which stimuli in the environment matter.
For example, if you are in a jungle and you see a lion, the lion is a salient stimulus, and your attention is focused on the lion instead of the trees, the sky, or whatever else is there. Why? Because the lion is a very important, salient problem you need to solve, or you will die. You need to use your cognitive and attentional resources to solve the lion problem. You see, cognitive and attentional resources are limited and costly. The brain is not a problem-solving computer that solves anything whatsoever; it solves relevant, salient problems.
Let's test intelligence with a problem, and let's assume that problem is salient for everyone: the IQ test.
The lion problem is salient to the person in the jungle, not to the person sitting on his comfy couch, drinking coffee and writing out his explanation of intelligence. Now apply this idea: an IQ test is a set of multiple-choice problems that, depending on how you solve them, assigns you a score ranging from borderline (75-85 points) to genius level (above 145 points).
Why would an IQ test be salient to everyone? The lion is extremely salient and important to the person who is actually in the jungle; in that real context, his brain will use every possible attentional and cognitive circuit to solve a life-or-death problem. Compare that with the person at home, drinking his coffee and discussing intelligence.
Replace the lion with an IQ test: does an IQ test have the same saliency for every person? In the lion example above, I was discussing the lion problem while drinking my coffee; it was not very relevant to me, and I did not use my cognitive resources…
Is there a dataset/resource that measures human judgments about which words in a text are most important for a machine learning task? I'm interested in textual entailment, but any task would be useful!
The use case here is building a better saliency map. There are a lot of methods out there (attention maps, simple gradients, SmoothGrad, integrated gradients, etc.), and all of the papers show examples of a given map. However, none that I found use a dataset of human responses as a "gold standard".
Hi to all ML Reddit folks!
I'm a Master's student currently working on a paper on explainability for LSTM models on medical data. Unfortunately, the field seems to be a bit messy so far, and I do not have any experts to talk to at our university. Hence, I'm asking you.
We developed an LSTM model to predict critical phases in an ICU. We want a saliency explanation for this model in order to 1. debug the model and tell a clinician why a prediction was made, and 2. potentially find out more about the underlying pathologies. We decided on gradient-based approaches, such as SmoothGrad and Integrated Gradients, to derive a saliency explanation per patient (we average the predictions over all time steps and take the gradient with respect to all features at all time steps).
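For anyone unfamiliar with SmoothGrad, a minimal sketch of the idea on a toy analytic model (the model, weights, and noise scale here are made up for illustration, not the LSTM above): average the input gradient over several noisy copies of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "model": f(x) = sum(tanh(W @ x)),
# with analytic input gradient df/dx = W.T @ (1 - tanh(W @ x)^2).
W = rng.normal(size=(4, 6))

def grad(x):
    return W.T @ (1.0 - np.tanh(W @ x) ** 2)

def smoothgrad(x, n=50, sigma=0.1):
    # SmoothGrad: average the gradient over n noisy copies of the input.
    noisy = x + rng.normal(scale=sigma, size=(n, x.size))
    return np.mean([grad(z) for z in noisy], axis=0)

x = rng.normal(size=6)
sal = smoothgrad(x)        # one importance score per input feature
```

As sigma goes to zero this reduces to the plain gradient; larger sigma trades faithfulness at the exact input for a less noisy map.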
These saliencies can be explored very nicely, but how do we know they are correct? While there are theoretical approaches for assessing a saliency method in general, such as Sensitivity-n and (In)fidelity, these do not address our specific use case. The only proper evaluation method I could find so far is the Most Relevant Features (MoRF) AUC and the Least Relevant Features (LeRF) AUC, together with their combination, the area between perturbation curves (ABPC), from Samek et al. 2016 (https://arxiv.org/pdf/1509.06321.pdf).
These approaches are based on the idea that a saliency method induces a ranking of the features. MoRF then iteratively removes more and more of the most important features (not literally removing them: the features are set to 0 or to the mean/median of that feature, or are perturbed with noise), which should induce a strong change in the logit output of the model. LeRF, on the other hand, removes more and more of the least important features, which, for a good saliency method, should induce only a small change in the model output. The ABPC is the difference between the MoRF and the LeRF curves; the greater its AUC, the better.
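The procedure above can be sketched on a toy linear model where the true per-feature contribution is known (everything here is illustrative; a real evaluation would use the trained network and its saliency scores):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": the logit is w @ x. With all-positive w and x,
# feature j contributes exactly w[j] * x[j] to the logit, so we can
# hand the evaluation a perfect saliency score.
w = rng.uniform(0.1, 1.0, size=8)
x = rng.uniform(0.1, 1.0, size=8)
logit = lambda v: float(w @ v)

saliency = w * x
order = np.argsort(saliency)[::-1]        # MoRF order: most relevant first

def perturbation_curve(order):
    """Iteratively 'remove' features (set to the baseline 0), recording the logit."""
    v = x.copy()
    curve = [logit(v)]
    for j in order:
        v[j] = 0.0
        curve.append(logit(v))
    return np.array(curve)

morf = perturbation_curve(order)          # logit should drop quickly
lerf = perturbation_curve(order[::-1])    # logit should drop slowly

# Area between the two perturbation curves, here as a simple mean of the
# per-step gap: positive means the ranking behaves as good saliency should.
abpc = float(np.mean(lerf - morf))
```

For a perfect saliency score the MoRF curve sits below the LeRF curve at every intermediate step, so `abpc` comes out positive; a random ranking would drive it toward zero.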
While I think this approach works, one thing annoys me: it only looks at the ranking of the features and ignores the entropy of the importance distribution. One feature could carry all the importance, with the model deeming all other features unimportant, and yet the evaluation of the saliency will ignore this and only register that this feature was ranked first. But any practitioner using the saliency map would only look at that first feature.
To resolve this, I thought of removing the features that together account for the top X percent of the importance (or the bottom X percent for LeRF). So, I…
I would like to ask whether there is a way to obtain network saliency maps for applications like object detection (aside from classification). For example, with standard gradient-based saliency maps, how can we do this if our output is an image map where each pixel is classified, rather than a single predicted value?
Thank you very much.
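One common answer, sketched here on a toy dense predictor with a finite-difference gradient (the model, weights, and region are made up for illustration): reduce the per-pixel output map to a single scalar, e.g. the sum of each pixel's predicted-class logit over a region of interest, and differentiate that scalar with respect to the input.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 3
W = rng.normal(size=C)                 # toy per-class weights
x = rng.normal(size=(5, 5))            # toy "image"

def scores(img):
    # Stand-in for a dense predictor's logit map, shape (H, W, C):
    # each pixel gets one score per class.
    return img[..., None] * W

pred = scores(x).argmax(axis=-1)       # per-pixel predicted class

mask = np.zeros((5, 5), dtype=bool)    # the pixels we want explained
mask[:3, :3] = True
ii, jj = np.where(mask)

def objective(img):
    # The key step: reduce the output *map* to one scalar by summing,
    # over the pixels of interest, the logit of the class predicted there.
    return scores(img)[ii, jj, pred[ii, jj]].sum()

def saliency(img, eps=1e-4):
    # Finite-difference gradient of the scalar w.r.t. each input pixel;
    # in a real framework this is a single backward pass instead.
    g = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            d = np.zeros_like(img)
            d[i, j] = eps
            g[i, j] = (objective(img + d) - objective(img - d)) / (2 * eps)
    return g

sal = np.abs(saliency(x))              # one importance value per pixel
```

The same reduction trick works for detection: sum the class score (or box objectness) of the detections you care about, then backpropagate that scalar.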
Hello people,
my coursemate and I are conducting research on the interpretability of chess engines. We tried to extend a system that can identify important pieces (SARFA saliency) by taking the important squares into consideration as well, given a puzzle position. In order to validate our results, we need chess players to take a simple test consisting of 15 puzzles.
This is an example of the puzzles you can find in the test
If you are interested in taking the test, please fill in the Google form below: you will receive an email with the link to the test.
Please be fair and take it as a simple challenge :)
The results are shown at the end of the test.
https://docs.google.com/.../1FAIpQLScfr2valt1.../viewform...
Thank you in advance for helping us with our research!
(Here you can find an example of the test: http://saliencyimproved.altervista.org/test/1/ - please don't open it if you wish to take part in the test)
I recently posted asking for recommendations for the most exciting ASDS landing to show a special patient I'm working with, and I thought I would share an update.
For those that missed my first post, one of my patients was the lead engineer for part of the Mercury through Apollo programs (CSM and LEM). They survived a stroke a year ago and have had a terrible time recovering. They came to my outpatient orthopedic clinic a little over a month ago with almost no standing tolerance, much less tolerance for walking any distance (<50 ft at best, with a lot of help). In that time we've worked on improving tolerance for activities of daily living, and one of the big ones was standing. Before our last treatment session, the longest the patient had stood since the stroke was <30 seconds. When I showed the patient the launch of Falcon Heavy with the rule that the video only played while they were standing, they were able to stand for almost 10 minutes, the longest single stand being over two minutes. That's when I promised I'd show them an ASDS landing next time, and asked you lovely people for a recommendation for the most exciting landing.
A few of you are true sadists and recommended I show the ASDS landing attempts in order, from T-30 until CRS-8, never telling the patient which one was the actual first success (if you ever want a change of pace, PT will be a growing field our entire lifetimes, and a few of you are really cut out for it!).
My patient came ready for the challenge and was able to stand for more than half of a 45-minute session, with the longest single stand lasting four minutes! We were able to get to the countdown of CRS-8 before they finally threw in the towel for the day. Next visit, the patient will get to see the fruits of their labor with a successful landing and a blooper reel to boot. I appreciate all of you for your encouragement to torture, er, push my patient to succeed.
Thanks /r/SpaceX, you're the best!
Hey everyone. A few weeks ago Twitter's saliency cropping model was under heavy scrutiny because of incidents of racial bias. After a few interesting conversations, we put together a demo of a state-of-the-art saliency model so that the tech can be better scrutinized. Our demo lets you see the saliency map along with the eventual crop (the differences in saliency across race are sometimes really wide), as well as manipulate the input image by adding text (text really throws off the model), flipping, etc.
You can try it out here.
Here's an example:
All sources (model used, associated paper, etc) are on the blog post.
Please share any interesting examples you try :)
Paper: https://arxiv.org/abs/2009.06962
Code: https://github.com/snu-mllab/PuzzleMix
Abstract:
>While deep neural networks achieve great performance on fitting the training distribution, the learned networks are prone to overfitting and are susceptible to adversarial attacks. In this regard, a number of mixup based augmentation methods have been recently proposed. However, these approaches mainly focus on creating previously unseen virtual examples and can sometimes provide misleading supervisory signal to the network. To this end, we propose Puzzle Mix, a mixup method for explicitly utilizing the saliency information and the underlying statistics of the natural examples. This leads to an interesting optimization problem alternating between the multi-label objective for optimal mixing mask and saliency discounted optimal transport objective. Our experiments show Puzzle Mix achieves the state of the art generalization and the adversarial robustness results compared to other mixup methods on CIFAR-100, Tiny-ImageNet, and ImageNet datasets.
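For intuition only, here is a much-simplified, pointwise sketch of saliency-aware mixing. This is not the actual Puzzle Mix method, which solves a mask/transport optimization (see the paper and code above); the data and the greedy rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "images" and their (precomputed) saliency maps.
x1, x2 = rng.random((2, 8, 8))
s1, s2 = rng.random((2, 8, 8))

# Greedy pointwise rule: at each pixel, keep the input whose saliency
# is higher, then mix the two labels by the resulting mask ratio.
mask = (s1 >= s2).astype(float)
mixed = mask * x1 + (1.0 - mask) * x2
lam = float(mask.mean())   # label mixing ratio for the pair (y1, y2)
```

Compared to vanilla mixup's uniform blend, the point is that the mixed example keeps the salient evidence from each input, so the mixed label `lam * y1 + (1 - lam) * y2` stays better aligned with what is visible.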
First, we propose a single framework under which backpropagation-based saliency methods can be unified. We show that saliency maps can be interpreted as a measure of how much the pixels contribute to the gradients of the weights. To do that, we use the fact that the gradient of spatially shared weights can be written as a sum over spatial locations.
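That identity can be checked numerically on a 1-D convolution (sizes and data here are arbitrary): the gradient of the shared weight is a sum over spatial locations, and each location's term gives a per-location contribution that can be read as saliency.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D convolution with a shared weight vector w (length K):
#   y[i] = sum_k w[k] * x[i + k]
# For a loss L = g . y (g = upstream gradient dL/dy), the gradient of the
# shared weight is a sum over spatial locations:
#   dL/dw[k] = sum_i g[i] * x[i + k]
# so location i contributes c[i, k] = g[i] * x[i + k].
K, N = 3, 10
w = rng.normal(size=K)
x = rng.normal(size=N)
g = rng.normal(size=N - K + 1)

# Per-location contributions to the weight gradient.
contrib = np.array([[g[i] * x[i + k] for k in range(K)]
                    for i in range(N - K + 1)])
grad_w = contrib.sum(axis=0)             # total weight gradient

# A gradient-style spatial saliency: the size of each location's
# contribution to the weight gradient.
sal = np.linalg.norm(contrib, axis=1)
```

Summing the contributions over locations recovers the full weight gradient exactly, which is the decomposition the unified framework builds on.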
Second, we combine saliency maps at different layers to test the ability of saliency methods to extract complementary information at different network levels (e.g.~trading off spatial resolution and distinctiveness).
Paper: https://arxiv.org/abs/2004.02866
Code: https://github.com/srebuffi/revisiting_saliency