A list of puns related to "Pseudorandomness"
RANDOMIZE TIMER
SCREEN 0

' Fill the 80x23 text area with "Z" characters in random colours.
FOR y = 1 TO 23
    FOR x = 1 TO 80
        COLOR INT(RND * 14) + 1
        LOCATE y, x
        PRINT "Z";
    NEXT
NEXT

' Main loop: visit a random cell and overwrite it with a random glyph,
' blank it, pause, or echo the previously visited cell.
xx = 1: yy = 1                      ' previous cell (avoids SCREEN(0, 0) on the first pass)
DO
    x = INT(RND * 80) + 1
    y = INT(RND * 23) + 1
    COLOR SCREEN(y, x, 1)           ' reuse the colour already at this cell
    LOCATE y, x
    b = INT(RND * 20)
    SELECT CASE b
        CASE 1 TO 5
            ch = 218 + b            ' solid and half-block characters 219-223
        CASE 6 TO 8
            ch = 170 + b            ' shaded blocks 176-178
        CASE 11
            COLOR INT(RND * 14) + 1
            PRINT " "               ' blank the cell in a new colour
        CASE 12 TO 15
            ch = 164 + b            ' characters 176-179
        CASE 16
            PRINT CHR$(254)         ' small solid square
        CASE 17
            ' busy-wait until TIMER ticks over (roughly a 1/18 s pause)
            t = TIMER
            WHILE t = TIMER
                t = TIMER
            WEND
        CASE 18
            PRINT CHR$(SCREEN(yy, xx, 0))   ' echo the character at the previous cell
        CASE ELSE
    END SELECT
    PRINT CHR$(ch);                 ' cases that set no new ch reuse the last one
    PALETTE INT(RND * 14) + 1, INT(RND * 63)   ' remap one colour to a random palette entry
    xx = x
    yy = y
LOOP UNTIL INKEY$ <> ""
Of course you might find a SHA-256 collision randomly on the first try, but what I mean is: can it be done not by luck, but on average?
I don't mean to make this specifically about SHA-256 or any existing hash or unitary algorithm. I'm more interested in the core theory of universal compute ops such as Toffoli, NAND, etc.
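For the generic, black-box answer, the relevant yardstick is the birthday bound: against an n-bit hash, a collision is expected after roughly 2^(n/2) evaluations, so about 2^128 for SHA-256. That is achievable "on average" in principle, but far out of reach in practice. Below is a minimal sketch of that generic birthday search, run against a deliberately truncated SHA-256 so it finishes quickly; the 4-byte truncation is purely an illustration, not anything from the question.

import hashlib
import os

def truncated_sha256(data, nbytes=4):
    # First nbytes of SHA-256; at 4 bytes (32 bits) a collision is expected
    # after roughly 2^16 hashes by the birthday bound.
    return hashlib.sha256(data).digest()[:nbytes]

def birthday_collision(nbytes=4):
    # Generic birthday search: hash random inputs until two distinct ones collide.
    seen = {}
    tries = 0
    while True:
        msg = os.urandom(16)
        digest = truncated_sha256(msg, nbytes)
        tries += 1
        if digest in seen and seen[digest] != msg:
            return seen[digest], msg, tries
        seen[digest] = msg

a, b, tries = birthday_collision()
print("collision after", tries, "hashes:", a.hex(), "vs", b.hex())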
Hello! I have a semi-theoretical question about the design and complexity of encryption schemes.
So let's say I wanted to build a system that used public file storage to store private files. I don't care if it's pastebins, or blockchains, or IPFS, or a WordPress site, whatever. I put my private files somewhere public for replication. Obviously the thing I would be worried about in this situation is making sure people can't read my private files. I would be hesitant to do this right now, because encryption schemes break over time, key-length recommendations change, etc., and so my once-private files could become public over time.
But if we imagine for a second that we had 2^(64) random bytes somewhere, wouldn't it be sufficient to just take the file I'm trying to encrypt, pick a random number from 0 to 2^(64) as my starting point, and then go through byte by byte, XOR-ing each byte of my data with the next byte of my random string? Like a book cipher, but the book is random.
I would anticipate, given the few things I know about OTPs, that this should be practically unbreakable, because there isn't really a scheme to break. Any interpretation of the data could be valid, and I assume an attacker would have to brute-force the keyspace (where the starting point is my key) to figure out the data.
Storing 2^(64) bytes is obviously a little tricky, so something more practical would be using a pseudorandom stream with a 64-bit seed to emulate the same concept. And honestly 64 bits isn't huge, so how about 256, 1024, or 2048? That's still substantially smaller than the size of most files someone would want to store.
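For concreteness, here is a minimal sketch of the construction being described: seed a generator with the key and XOR its output into the file, which is exactly the structure of a stream cipher. Python's random module stands in for the generator purely to show the shape of it; it is not a secure keystream generator, and that is precisely where schemes like this stand or fall.

import random

def xor_with_prng_stream(data, seed):
    # XOR every byte of data with the next byte of a seeded pseudorandom stream.
    # The same call decrypts, because XOR is its own inverse.
    # NOTE: random.Random (Mersenne Twister) is NOT cryptographically secure;
    # this only illustrates the structure of the proposal.
    prng = random.Random(seed)
    return bytes(b ^ prng.getrandbits(8) for b in data)

plaintext = b"my private file contents"
seed = 0x123456789ABCDEF0          # the "key": a 64-bit secret starting point
ciphertext = xor_with_prng_stream(plaintext, seed)
assert xor_with_prng_stream(ciphertext, seed) == plaintext

Everything here rests on the generator being indistinguishable from random to an attacker, and a 64-bit seed also leaves a keyspace small enough to search exhaustively.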
So, now to my questions. Schemes such as AES are substantially more complicated than this. I assume that's for a reason, because a lot of interested and knowledgeable people spent a lot of time on them. I assume it's not just for fun, and there's some really important reason the scheme I made up here is much, much worse. Is there an easy way to point out that flaw to me?
In this scheme the ciphertext would be public, but I would need to manage the key separately. And each file would need its own key, which means that as I store more information, the number of things I need to remember in order to retrieve this data grows forever. Does a lot of the complexity in real schemes come from the key generation algorithm that allows a human to type a password, turning that short bit of low-entropy information into a keystream?
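That last piece exists as its own building block, usually called a key derivation function: it stretches a short, low-entropy password into a fixed-length key, and it is separate from the cipher itself. A hedged sketch using PBKDF2 from Python's standard library (the iteration count and lengths here are placeholders, not recommendations):

import hashlib
import os

def derive_key(password, salt, length=32):
    # Stretch a low-entropy password into a fixed-length key.
    # The salt is stored alongside the ciphertext (it is not secret);
    # the iteration count deliberately slows down password guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600000, length)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(key.hex())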
Is the pseudorandom
...
I made a video explaining randomness in as short and as engaging a way as possible for a layman's audience: https://www.youtube.com/watch?v=70V_WP9Vusw&ab_channel=DrawCuriosity
The two top things I'm interested in knowing:
1. Did you enjoy it? Was the pace good (and if not, too fast or too slow)? Opinions on the presentation style? Did you understand the explanation of pseudorandomness?
2. Audio: good or bad, and how would you improve it?
Self-review: I'm making changes in my production in terms of camera, lighting and setup, so I won't make too many comments there. I've noticed that on Windows (vs. Macs) it also appears a lot darker. I thought my audio was OK, but in a previous video people mentioned it sounded slightly echoey and that a lapel mic could be better.
I'm interested in knowing how you feel about the presentation. I personally like the pace of it, and most of my closer friends think it's right, but some people thought it was too much and others thought I could give it a bit more enthusiasm. Curious to know where you stand.
I'm most concerned about whether it is a topic people would be interested in watching, as I'm building an educational channel.
My instructor wants us to find the pattern in these simulated roulette results that came from a generator. I just can't think of a way to analyze the data that will help me find the pattern.
Here is the data:
15, 6, 30, 7, 32, 30, 3, 26, 33, 8, 28, 27, 16, 7, 7, 11, 25, 11, 10, 25
30, 12, 21, 8, 23, 33, 23, 29, 33, 10, 33, 00, 25, 0, 8, 31, 15, 25, 18, 9
29, 00, 33, 9, 26, 27, 28, 23, 5, 9, 13, 33, 27, 33, 1, 20, 10, 16, 30, 8
22, 0, 28, 15, 32, 15, 26, 26, 22, 24, 14, 3, 34, 32, 35, 14, 33, 13, 25, 8
35, 28, 15, 29, 33, 22, 13, 24, 26, 16, 32, 13, 3, 32, 14, 31, 5, 36, 8, 9
30, 7, 7, 21, 22, 31, 22, 28, 16, 27, 8, 8, 35, 16, 0, 24, 16, 25, 7, 15
35, 25, 5, 24, 8, 33, 36, 25, 0, 31, 3, 2, 23, 17, 11, 18, 15, 0, 11, 14
14, 19, 14, 16, 28, 27, 35, 25, 17, 6, 28, 27, 7, 22, 32, 29, 9, 24, 19, 22
33, 14, 16, 36, 15, 27, 8, 14, 10, 16, 28, 8, 22, 27, 25, 36, 20, 29, 30, 19
6, 32, 12, 8, 4, 19, 25, 1, 15, 17, 34, 17, 21, 23, 4, 21, 6, 29, 33, 23
30, 1, 14, 10, 27, 27, 27, 15, 11, 2, 10, 2, 00, 23, 00, 5, 19, 13, 16, 19
18, 11, 13, 24, 34, 33, 19, 36, 10, 14, 24, 13, 9, 35, 29, 8, 25, 1, 23, 4
20, 28, 18, 0, 33, 18, 17, 22, 36, 00, 26, 15, 20, 36, 36, 27, 21, 20, 00, 17
9, 14, 14, 14, 18, 7, 12, 29, 13, 30, 1, 28, 14, 13, 30, 7, 31, 9, 8, 4
15, 3, 28, 19, 24, 24, 27, 25, 11, 32, 10, 19, 25, 17, 20, 9, 29, 11, 26, 33
18, 29, 11, 11, 24, 4, 32, 20, 33, 15, 20, 1, 12, 36, 25, 28, 1, 22, 36, 10
28, 35, 6, 28, 1, 24, 5, 15, 25, 23, 20, 14, 13, 0, 28, 5, 30, 20, 24, 25
We're supposed to do this using fairly simple math and statistics.
Would 1,000 trials, 1,000,000 trials, or more be needed to see significant results? Would looking at the relative frequencies of each number work, or at subsequences of a certain length?
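One simple first pass along the frequency line is a chi-squared test: on a fair American wheel (pockets 0-36 plus 00, 38 in total) every number is equally likely, so the single-number counts should stay close to n/38, and a large chi-squared statistic flags a biased generator. A rough sketch, assuming the spins above have been pasted into a list of strings (the variable name results is just a placeholder):

from collections import Counter

# Paste the 340 spins above into this list, as strings so "0" and "00" stay distinct.
results = ["15", "6", "30", "7", "32", "30", "3", "26", "33", "8"]  # ...and so on

pockets = [str(n) for n in range(37)] + ["00"]     # American wheel: 0-36 plus 00
counts = Counter(results)
expected = len(results) / len(pockets)

# Pearson chi-squared statistic with len(pockets) - 1 = 37 degrees of freedom.
chi2 = sum((counts.get(p, 0) - expected) ** 2 / expected for p in pockets)
print("chi-squared =", round(chi2, 1), "with", len(pockets) - 1, "degrees of freedom")

A pattern that lives in the ordering rather than the frequencies (each spin depending on the previous one, say) would not show up here; tallying pairs or longer subsequences of consecutive spins, as the question suggests, is the analogous test for that.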
Do the methods of number generation vary greatly?
What makes a pseudorandom number generator have a long sequence before it repeats (a long period), and how do we figure out how to make a PRNG with a long period? Is there some mathematical way to figure out the best program for an incredibly long and random sequence? Or is it just random guessing using intuition?
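For at least one common family, the linear congruential generators, there is a real mathematical answer rather than guesswork: the Hull-Dobell theorem gives conditions on the multiplier a, increment c, and modulus m under which the recurrence x_{n+1} = (a*x_n + c) mod m attains the full period m. A small sketch that checks this empirically on a toy generator (the constants are example values chosen to satisfy the theorem, nothing more):

def lcg_period(seed, a, c, m):
    # Length of the cycle of x -> (a*x + c) % m starting from seed.
    x = (a * seed + c) % m
    steps = 1
    while x != seed:
        x = (a * x + c) % m
        steps += 1
    return steps

# Hull-Dobell: c coprime to m, a - 1 divisible by every prime factor of m,
# and by 4 if m is divisible by 4. With a = 5, c = 3, m = 2^16 all three hold,
# so the period is the full modulus, 65536.
print(lcg_period(seed=1, a=5, c=3, m=2 ** 16))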
I was reading about pseudorandom number generation, and I was wondering what Java uses in its Math.random() and the Random utility class to generate random numbers. What I am really asking is: does Java use this linear congruential generator, or something else?
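As far as the documented behaviour goes, java.util.Random is a linear congruential generator: 48 bits of state, multiplier 0x5DEECE66D, increment 11, with the top bits of the new state returned on each call, and Math.random() just draws from a shared Random instance. Here is a sketch of that recurrence outside Java, to make the point that there is nothing more to it (ported from memory of the javadoc, so treat the details as an approximation):

MULTIPLIER = 0x5DEECE66D       # constants documented for java.util.Random
INCREMENT = 0xB
MASK = (1 << 48) - 1           # the state is kept to 48 bits

class JavaStyleRandom:
    # Re-implementation of java.util.Random's core LCG step, for illustration.
    def __init__(self, seed):
        self.state = (seed ^ MULTIPLIER) & MASK

    def next_bits(self, bits):
        self.state = (self.state * MULTIPLIER + INCREMENT) & MASK
        return self.state >> (48 - bits)

    def next_int(self):
        value = self.next_bits(32)
        return value - (1 << 32) if value >= (1 << 31) else value   # signed 32-bit

r = JavaStyleRandom(42)
print([r.next_int() for _ in range(3)])   # should track new java.util.Random(42).nextInt()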
I know that picking or banning maps is not coming anytime soon, since there are only 4 maps at the moment. And I know that a lot of players strongly prefer some maps and hate others (I'm talking about you, Split).
And this seemingly purely random system creates a big problem: playing the same map over and over, multiple times.
For example, I played Split 13 times out of my last 20 competitive games, once 5 times in a row. We all know that maps in Valorant have a lot of problems and frustrating areas. Having a chance to play the same map 5 or even 20 times in a row makes this problem even bigger.
Because if it's a purely random system, every time you queue every map has a 25% chance of being chosen, which means you can theoretically play only one map over and over and there's NOTHING you can do about it. And it's not really fun when you want to have a fun competitive evening with friends and you play one map five times in a row.
Instead I propose a simple pseudorandom system.
After you play a map, your chance of getting that same map drops in half, so it's 12.5%; if you get the same map again, it's 6.25%, and so on. Or it could drop even more, or be assigned different parameters after playing, based on playtime etc.
This way it will feel more random and be more fun for everyone.
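Here is a sketch of what that weighting could look like in code; the halving factor, the reset of the other maps, and the use of per-map weights are assumptions for illustration, not anything Riot has described:

import random

MAPS = ["Bind", "Haven", "Split", "Ascent"]

def pick_map(weights):
    # Weighted random draw, then halve the chosen map's weight so an immediate
    # repeat becomes less and less likely; every other map recovers full weight.
    # (With weights the exact percentages differ slightly from a literal 12.5%,
    # but the behaviour is the same: repeat chances decay geometrically.)
    names = list(weights)
    chosen = random.choices(names, weights=[weights[n] for n in names])[0]
    for name in names:
        weights[name] = weights[name] / 2 if name == chosen else 1.0
    return chosen

weights = {m: 1.0 for m in MAPS}
print([pick_map(weights) for _ in range(12)])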
And btw: Apple had this same problem when iTunes came out. Their shuffle feature used a truly random system, and people were complaining that it picked the same songs over and over, because there was a chance for that to happen. After the complaints, they made it pseudorandom, so that when a song was played, its chance of playing again soon dropped radically.
EDIT:
A lot of people are saying that this is not possible, or that it's a problem, because there are 10 players and only four maps and everyone played a different map last game.
This is not a problem at all. Here's why:
The system would just match together players that played the same map last game and pick one of the other three for that game.
That would basically be like you manually picking three out of four maps, except the system picks those three for you. This would result in slightly longer queue times. For example, in CS:GO you can solo queue for just one map and be just fine, and in this pseudorandom system you are queueing for three maps.
There is only ONE situation where this doesn't work: if you play in a four-stack and each of you played a different map last game, which will result in a random map. But that is a rare situation.