How to use GPU in an embarrassingly parallel calculation?

I have a calculation that takes about one month running on a server with roughly 100 CPUs, using foreach. The calculation is embarrassingly parallel. There is no machine learning involved; the calculation consists of a series of traditional optimizations.

I do not think the server has any relevant GPU.

I have read that GPUs can massively accelerate some tasks. But if I understand correctly, using a GPU from R is not as simple as CPU parallelism.

Where can I learn how to use a GPU efficiently from my R code? If R cannot do that, should I switch to Python or C++ with CUDA? (Note: I already use Rcpp for intensive loops.)

👍︎ 6
👤︎ u/Empodering
📅︎ Mar 16 2021
Embarrassingly Parallel - Bioinformatics tutorial (Counting kmers in Python) youtube.com/watch?v=7Kue7…
👍︎ 11
👤︎ u/Singular23
📅︎ Jun 15 2020
Database Support 13: Embarrassingly Parallel Operations

Last time on Database Support: Stop the ride, I wanna get off!


A few months after my last tale, everything was looking up for my team. We'd been assigned several new hires and trained them up quickly but hadn't had them yanked away from us as per usual, everyone on the team was contributing well to all of our projects (even good ol' Superfluous was relatively competent by this point and no longer deserved that nickname), and we were finally making progress with our massive backlog of work.

Naturally, this wasn't to last.

The department had a big release coming up, the biggest one in years, and management decided that having eight people on our team was simply too many. I don't remember the exact reasons they gave as to why having exactly four pairs of people to deal with more than four different tracks of work was so objectionable, but I do recall that said reasons sounded pretty stupid and flimsy at the time and basically boiled down to "Big teams are bad, mmkay?".

One morning we were called into a meeting with a few of the higher-ups and given the bad news:

> $Manager1: We've decided that certain tracks of work in your backlog are very high priority, and are worth splitting out a separate team to work on them. DB_Dev, $Coworker1, $Coworker2, and $Coworker3 will be staying on this team with the current tracks of work, and RelEng, Gilderoy, $Coworker4, and $Coworker5 will be forming a new Infrastructure team.
> $Coworker1: Where will the Infrastructure team be located? There's not much free space on this floor.
> $Manager1: They're going to work next to the Utilities team. The same space you're in now, just rearranged a bit.
> $Coworker2: So, not everyone has the context on all the stuff the team works on, and even DB_Dev and RelEng don't have a handle on everything. What should we do if we need someone on the other team to help us? It seems like there would be lots of explaining back and forth going on.
> $Manager1: We'll be giving both teams access to each other's task tracker, so if someone asks you a question about the other team's work you can quickly get up to speed on it.
> $Manager2: But we're confident that each team w

... keep reading on reddit ➡

👍︎ 587
👤︎ u/db_dev
📅︎ May 02 2018
The rise of embarrassingly parallel serverless compute davidwells.io/blog/rise-o…
👍︎ 6
👤︎ u/qznc_bot2
📅︎ Jun 14 2020
Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@Home, Folding@Home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@Home, Folding@Home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.

The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).

No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.

**Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successfu

... keep reading on reddit ➡

👍︎ 91
👤︎ u/ydtm
📅︎ Sep 25 2016
What is the ubiquitous way to do “embarrassingly parallel” computing on AWS with a large amount of data?

I’ve looked into a number of different AWS services that can massively distribute parts of a large data set to many machines for computation. So far I have looked into Lambdas, Elastic MapReduce, and just managing EC2 instances on my own.

Managing my own instances is not appealing as I do not want to incur the development costs of handling faulty instances, retrying, etc. Lambdas initially appeared promising, but the size of my problem is too large, so that’s a no-go. EMR looks like the way to go, but I’ve heard mixed things about Hadoop MapReduce (the main criticisms being that it’s slow and hard to program).

I want to understand what the industry standard service is for my use case. For context, I have a huge amount of data over a twenty year period, and I want to do independent statistical analysis (regression) on every single day using R. Data size within each day is large, but could maybe fit into memory on large EC2 instances. Think of a data frame with 12000 variables and 120000 observations.

Any direction from those in the know would be greatly appreciated.

👍︎ 3
👤︎ u/201jon
📅︎ May 11 2019
TIL that in computing the phrase "Embarrassingly parallel" is a legitimate technical phrase to describe computing tasks which can be separated into multiple concurrent tasks with little to no communication between them. en.wikipedia.org/wiki/Emb…
👍︎ 19
👤︎ u/throgmorto
📅︎ Jul 17 2019
Almost every program is embarrassingly parallel in Erlang. news.ycombinator.com/item…
👍︎ 31
👤︎ u/one_zer
📅︎ Jan 24 2017
The Opposite of the Embarrassingly Parallel Problem blog.sei.cmu.edu/post.cfm…
👍︎ 77
👤︎ u/hbrayer
📅︎ Feb 09 2015
[Python/MPI] Embarrassingly parallel problem

Apologies if this question was asked before - I think it's a fairly simple one but after a bit of Googling I didn't find exactly what I needed.

I have a python script that I need to run in parallel, with no communication required between the threads, i.e. embarrassingly parallel. In fact, all I need is to run the exact same Python script N times using N CPU cores. I wrote a bash script that does this, and it works fine. However, I would like to achieve the same result while spawning only one process instead of N. What's the easiest/best way to do this, preferably using the fewest and most common external libraries?

Note: I have used MPI in Fortran, but not with Python before.

Edit: so with GNU Parallel, one can do this, although I haven't yet tested whether it also works on the cluster. If I check the processes with the top command in the terminal I still see N processes, but that is probably fine, since it is only the GNU Parallel program that calls them. I will try when I am able. The command, run from the directory where myshell.sh is present:

/local_installation_directory/parallel --jobs N -n0 ./myshell.sh ::: {1..N}

will give what I need, where myshell.sh is a bash script that just calls my Python script (but could be anything). The -n0 option means that no input arguments are passed; the sequence {1..N} is just a series of numbers that tells GNU Parallel to run the script N times. --jobs N is an optional argument that fixes the number of simultaneous runs; if not given, GNU Parallel uses the number of available CPUs.
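If spawning from a single parent process is the main requirement, Python's standard library can do this without MPI or GNU Parallel. A minimal sketch, assuming the real work is wrapped in a function (`run_task` here is a placeholder; in practice it could launch the existing script via `subprocess.run`):

```python
from multiprocessing import Pool

def run_task(run_id):
    # Placeholder for the real work done by each independent copy.
    return run_id * run_id

def run_all(n_runs, n_workers):
    # One parent process; the Pool manages the N child processes.
    with Pool(processes=n_workers) as pool:
        return pool.map(run_task, range(n_runs))

if __name__ == "__main__":
    print(run_all(8, 4))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

As with GNU Parallel, `top` will still show N worker processes, but only one process was launched by hand.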

👍︎ 3
👤︎ u/Hapankaali
📅︎ Aug 11 2016
Tackling embarrassingly parallel problems in PHP

How do you approach problems that are so obviously parallel in PHP?
is there a way to implement the Producer-consumer pattern?
In this example I am performing an operation to every file in a directory.

  • retrieve the full path and filename
  • get the sha1 checksum of the file
  • determine where to move the file to based on its name
  • insert the filename and checksum into the database
  • move the file to its new location

Don't limit discussion to this example alone, I'd like to have a broad discussion about parallelism in PHP.

$it = new FilesystemIterator($import_dir);
foreach ($it as $fileinfo) {
    if ($fileinfo->isFile() && (!strstr($fileinfo, '._'))) {
        $filepath = $fileinfo->getPathname();
        $filename = $fileinfo->getFilename();
        $checksum = sha1_file($filepath);
        $dest_path = MyClass::GetDest($filename);
        $db->execute(array($filename, $checksum));
        rename($filepath, $dest_path);
    }
}
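Not PHP, but the worker-pool shape is the same in any language. A hedged sketch in Python of one way to split the steps listed above: the CPU-bound checksum fans out to a pool of workers, while the database insert and rename stay serial in the parent (the `record` callback and directory handling are made up for illustration):

```python
import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

def checksum(path):
    # Independent, CPU-bound work: safe to run in parallel workers.
    with open(path, "rb") as f:
        return path, hashlib.sha1(f.read()).hexdigest()

def process_dir(import_dir, record):
    # Producer: enumerate files, skipping '._' metadata files.
    paths = [os.path.join(import_dir, name)
             for name in os.listdir(import_dir) if "._" not in name]
    with ProcessPoolExecutor() as pool:
        for path, digest in pool.map(checksum, sorted(paths)):
            # Serial consumer: one DB connection, no rename races.
            record(os.path.basename(path), digest)
```

Keeping the stateful steps (DB, rename) in one consumer sidesteps most of the locking questions a full producer-consumer setup raises.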
👍︎ 10
👤︎ u/andyzweb
📅︎ Mar 16 2012
Concurrent Version of Embarrassingly Parallel Algorithms from <algorithm>

Hey everyone, I've been watching this subreddit for a while but this is my first post.

A little while ago I was playing around with the new concurrency stuff in the C++11 standard and wrote a few algorithms that take an iterator range, break it apart into n sub-ranges (where n is the number of threads available on the system), and run the algorithm over each of these ranges (nothing new here).

This worked nicely, but unfortunately it cluttered the code a little, so I decided to abstract all of the concurrency work away so one just has to write which algorithm should be applied to the sub-ranges and how to combine these to get the final result. For example, for std::count, you would call std::count on the sub-ranges and then std::accumulate on the results of the pieces, i.e. map and reduce.

After this, I decided to implement a portion of <algorithm> and it turned out to be quite straightforward to accomplish :) Anyways, the code is here, let me know what you think.
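The std::count case described above can be sketched compactly (in Python rather than C++, purely to illustrate the map/reduce split): each worker runs the plain serial algorithm on its sub-range, and the partial results are accumulated at the end.

```python
from concurrent.futures import ProcessPoolExecutor

def count_chunk(args):
    # "map" step: the ordinary serial algorithm on one sub-range.
    chunk, value = args
    return chunk.count(value)

def parallel_count(data, value, n_workers=4):
    # Break the range into n roughly equal sub-ranges.
    size = max(1, (len(data) + n_workers - 1) // n_workers)
    chunks = [(data[i:i + size], value) for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(count_chunk, chunks))
    # "reduce" step: accumulate the per-chunk counts.
    return sum(partials)
```

The same two-function skeleton covers any algorithm whose results combine associatively (count, min/max, sum, etc.).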

👍︎ 17
👤︎ u/mrenaud92
📅︎ Dec 18 2012
Julia and Sun Grid Engine for embarrassingly parallel jobs

As a matlab-er doing computational biology, I have been extremely impressed with what I have seen from julia, and I am looking to possibly implement a large bioinformatics package in the language.

I have a question before diving in though: how well does julia play with a Sun Grid Engine (SGE) computing cluster? My task is embarrassingly parallel, where I basically need to save the output of a function for a wide range of inputs (that are independent from one another). The only HPC cluster I have access to is running SGE and I do not have administrative rights to this cluster, and I cannot expect much from the sys admins in the way of customizing things for me or giving me much access to the internals of the system. So I really need "off-the-shelf" SGE support in Julia if I am going to make things work nicely.

The documentation here:

http://docs.julialang.org/en/latest/manual/parallel-computing/

do not discuss working in an SGE environment, so I was hoping someone could help me figure out how this would work in practice for me.

So let's say I want to do this:

x=zeros(10,1)
for index=1:10
   x[index]=my_function(index)
end
@save "/home/myfile.jld" x

where the loop is executed independently on 10 worker cores. How do I get julia to do the necessary "qsub" commands to allocate these resources and carry out this task? Has anyone done something like this?

Edit: Posting some of my current understanding for posterity in case anyone finds this when searching. This is untested code that might accomplish the above task (feel free to comment if I am mistaken here):

using ClusterManagers # assumes you have done Pkg.add("ClusterManagers")
addprocs_sge(10)       # open 10 jobs on default queue

# use pmap to asynchronously have workers compute my_function for inputs 1,...,10
x = pmap(my_function,1:10)

# close down all workers
for i in workers()
   rmprocs(i)
end
👍︎ 10
👤︎ u/DrGar
📅︎ Jun 15 2015
Evolution Strategies: Almost Embarrassingly Parallel Optimization inference.vc/evolutionary…
👍︎ 9
👤︎ u/filosoful
📅︎ Mar 31 2017
Embarrassingly parallel for loop

I have the following code to clean up some text and lemmatize it: https://gist.github.com/cigrainger/d3b24e002bb140d45c8c

The problem is that this is running at about 10-15 lines per second, and with 30 million lines, that would take about a month to process. Is there a faster way to do this? Can I use multiple processes or threads?

EDIT: Thanks for the help everyone. I found the answer on Stack Overflow: http://stackoverflow.com/questions/11631457/perform-a-for-loop-in-parallel-in-python-3-2. This is the updated code: https://gist.github.com/cigrainger/8856a05109fbf766604d

I was trying to do everything one line at a time (read, clean, write) because I was working on a normal desktop before (the file is about 30gb). I'm working on an Amazon EC2 instance with 32 cores and 244gb RAM right now, so I figured screw it. Parallel returns a list, so I was having problems with writing in the loop, so I just returned a list and then wrote the list to file. It's running now. On a test chunk, I was able to process and write 2800 lines in about 12 seconds, so even if that's as fast as it gets, I'm looking at ~35 hrs rather than 30+ days.
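The pattern described in the edit (read everything, process in parallel, write once at the end) boils down to a pool map with a generous chunksize. A minimal sketch, with `clean_line` as a stand-in for the real clean-and-lemmatize step:

```python
from multiprocessing import Pool

def clean_line(line):
    # Stand-in for the real cleaning + lemmatization of one line.
    return line.strip().lower()

def process_all(lines, n_workers=4, chunksize=1000):
    # A large chunksize amortizes inter-process overhead across
    # millions of short, independent tasks; map preserves order,
    # so the result can be written out in one pass afterwards.
    with Pool(n_workers) as pool:
        return pool.map(clean_line, lines, chunksize=chunksize)
```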

👍︎ 3
📅︎ Jun 05 2014
Playing with Go: Embarrassingly Parallel Scripts collectiveidea.com/blog/a…
👍︎ 9
👤︎ u/dgryski
📅︎ Dec 03 2012
Embarrassingly parallel en.wikipedia.org/wiki/Emb…
👍︎ 3
📅︎ Jul 20 2014
Video: Embarrassingly parallel method tracks quantum mechanics in a photosynthetic molecule. Crossposted to /r/quantum krellinst.org/csgf/conf/2…
👍︎ 11
👤︎ u/greenprius
📅︎ Oct 10 2014
Embarrassingly parallel - Wikipedia, the free encyclopedia en.wikipedia.org/wiki/Emb…
👍︎ 3
👤︎ u/57n
📅︎ Jun 18 2014
A brief analysis on "The Divine Damsel of Devastation" (Yun Jin's opera performance)

Lyrics (line-for-line):

可——叹—— (kě —— tàn ——)
Alas!

秋鸿折单复难双 (qiū hóng zhé dān fù nán shuāng)
Two loving souls by death cruelly parted

痴人痴怨恨迷狂 (chī rén chī yuàn hèn mí kuáng)
In madness and grief, a dark path started

只因那邪牲祭伏定祸殃 (zhǐ yīn nà xié shēng jì fú dìng huò yāng)
Calamity was drawn, rituals subverted

若非巾帼拔剑人皆命丧 (ruò fēi jīn guó bá jiàn rén jiē mìng sàng)
But by her cold steel was death averted

凡缘朦朦仙缘滔 (fán yuán méng méng xiān yuán tāo)
Mortal ties broken, with the adepti she went

天伦散去绛府邀 (tiān lún sàn qù jiàng fǔ yāo)
Their abiding place filling paternal bonds rent

朱丝缚绝烂柯樵 (zhū sī fù jué làn kē qiáo)
To her red strings of binding they sent

雪泥鸿迹遥 (xuě ní hóng jī yáo)
And they dwelt long together content

鹤归不见昔华表 (hè guī bú jiàn xī huá biǎo)
The crane returned to a home without luster

蛛丝枉结魂幡飘 (zhū sī wǎng jié hún fān piāo)
The cobwebs overgrown, the grave-shrouds a-fluster

因果红尘渺渺 (yīn guǒ hóng chén miǎo miǎo)
But one bond upon her

烟消 (yān xiāo)
This world could not muster

《神女劈观》到这里本该接近尾声 (shén nǚ pī guān dào zhè lǐ běn gāi jiē jìn wěi shēng)
Thus does the Divine Damsel's tale duly end

但今日我再添一笔—— (dàn jīn rì wǒ zài tiān yī bǐ ——)
But today a new tale I have to append

唱与——诸位——听—— (chàng yǔ —— zhū wèi —— tīng ——)
Which I shall now tell — if you shall attend

曲高未必人不识 (qū gāo wèi bì rén bù shí)
From the world she seems apart

自有知音和清词 (zì yǒu zhī yīn hé qīng cí)
But there are those who know her heart

红缨猎猎剑流星 (hóng yīng liè liè jiàn liú xīng)
With crimson spear and flashing brand

直指怒潮洗海清 (zhí zhǐ nù cháo xǐ hǎi qīng)
To still the raging tides they stand

彼时鹤归 (bǐ shí hè guī)
The crane once returned

茫茫天地无依靠 (máng máng tiān dì wú yī kào)
And once, she was spurned

孤身离去 (gū shēn lí qù)
She turned, and left alone

今日再会 (jīn rì zài huì)
Now, she might be found

新朋旧友坐满堂 (xīn péng jiù yǒu zuò mǎn táng)
With friends all around

共聚此时 (gòng jù cǐ shí)
To whom she is bound — a home

Line-by-line Analysis:

Let me begin with a disclaimer. I am Chinese-literate, and I was an English literature and English linguistics student back in school. However, Chinese literature is something of another beast altogether so please understand if this analysis is... less than ideal.

> 可——叹——

I would like to point out that 可叹 is coincidentally the title of a poem by 杜甫, written during the Tang Dynasty. Other than that, this opening line literally means “w

... keep reading on reddit ➡

👍︎ 275
📅︎ Jan 05 2022
[The Ambassador] Part 3 - Rust

Part2 | [Part 4]

#Transit Nala quickly fell into a regular routine which involved spending a lot of time with Mark in the recreation rooms trying different games and activities. Mark, with no patients to doctor, also had little to do, which was a relief to Nala because apparently only Mark, Ms. Mwangi, and the Captain spoke rladii. Nala couldn’t read any of the text or understand any of the spoken language in video media, so she was limited in which entertainment she could enjoy and who she could enjoy it with.

Nala ate her meals with Mark so he could help her with the menu. Sometimes the Captain or Ms. Mwangi joined them in the galley, but the rest of the crew seemed to mostly eat at different times. About four days into the trip, while the four of them were eating together, Nala noticed that while Mark was using a set of three metal utensils to eat his pasta dish as she had seen him do before, the Captain was coordinating two rods in one hand to eat his mix of rice and vegetables, and Ms. Mwangi was eating a variety of items by tearing off a piece of some kind of flat bread and pinching the food items within it. Mark informed her that this was a pretty good example of the three basic ways humans eat (flatware, chopsticks, and fingers) while explaining that each technique was favored by roughly one third of the human population. On a ship like the Killdeer, with such an eclectic crew, each crew member was likely to switch between methods, depending on exactly what they were eating at the time. Mark struggled a little bit to explain it, because "eclectic" was not a word the homogeneous rladii had a direct translation for. But, looking around the table, Nala was beginning to understand the concept.

Since they were on the topic of what they were eating, Nala took the opening to ask about something that was bothering her: she depended on Mark to order food for her at each meal. There were a lot of humans on the ship (rather more than Nala thought should be required for a freighter of this size, actually) but apparently only the three currently at the table with her could speak rladii. Nala inquired if they would be willing to work with her to help her learn to speak and read Human. That brought a laugh out of Mark and the Captain almost choked on his food. It was Ms. Mwangi that came to her rescue. "Humans speak over six thousand different languages. However, many people

... keep reading on reddit ➡

👍︎ 58
📅︎ Jan 12 2022
Should I fill the 2nd CPU in a Xeon server, or just double the RAM?

I'm trying to decide how I should extend my Intel server. I have a 2-socket server (Lenovo SR650) with a single 6248R and 512GB of RAM (in 4x128GB sticks). I do CPU/RAM-intensive simulations where I can use about 10 cores at a time for the 512GB of RAM. The simulations are embarrassingly parallel, so there is little need for communication between CPU cores. To double the number of simultaneous runs, I need to double the RAM. But I'm wondering if I should also add a 2nd Xeon 6248R to my server.

From previous runs with other computers, I assume just doubling the RAM will only increase scenario throughput by maybe 50 or 60%. But I'm wondering if adding a 2nd CPU will push that to 80% or more increase.

In summary: I have to get 4x128GB more RAM, but do I also add a 2nd CPU?

Thanks for any insights!

👍︎ 2
👤︎ u/buggaby
📅︎ Jan 06 2022
please dont hate me.

after several months, i finally mustered up the courage to tell you almost everything i wanted to tell you. it was a long letter. in fact it was embarrassingly long, so long that i had to break it up into several smaller messages. i sat with my thumb hovering over the "send" button longer than i perhaps should've, because i knew that this letter would irreparably change the course of the future—our future—if we even had one. after i sent the first one, the adrenaline kicked in and i sent the rest in quick succession. i really hope that your phone was on silent.

i tried hard to seem like i didnt care much about you (the strange nature of the situation meant that i was scared to show even a sliver of emotion so i had to be as objective as possible) but i worry that i overdid it and came off a bit too aloof, verging on unsympathetic. but i do care, i promise.

i ended up muting the notifications from our chat because seeing your name gives me butterflies and my stomach is already full of them.

at the beginning of the letter i told you that i was partly sending it because i didnt have much to lose. i was wrong. or at least i think so.

we werent exactly friends because we didnt hang out and we didnt really depend on eachother. but we werent quite strangers, because in those few times we met, i feel like i learned everything i needed to know about you.

it feels like we're two people on two completely different paths that still manage to run parallel to one another. sometimes, looking at you feels like looking in a mirror.

of course, i couldnt tell you all of this at once. maybe another letter is in order.

i dont know when you'll get round to reading the whole letter, but please dont hate me. i dont wish to cause you pain but if deep down you feel the same way that i do, i know that itll be hard to read and even harder to digest. ive tried to make things easier by making the first move. i really hope that you trust me enough to be open and honest. i wont be upset.

time seems to be moving especially slowly. and like a child on christmas eve who sits by the tree and watches their presents, i find myself staring at my inbox, wondering when you'll respond. it might be days, weeks, even months. but either way, ill be here, waiting for you with an open heart, regardless of what your response may be.

👍︎ 14
📅︎ Jan 11 2022
How much potential is there for pay-as-you-go serverless scientific computing?

Examples of serverless computing are Amazon Lambda, Google Cloud Functions and Azure Serverless.

Scientific computing is conducted by researchers and engineers. A headache of scientific computing is that each full-scale iteration takes long CPU hours before the researcher can view the result, making testing and debugging feasible only at smaller surrogate scales.

Thus, more and more researchers are using serverless clouds to dynamically summon a high surge of computational power and release it when the full-scale iteration finishes running.

For computation tasks without sensitive data or code, such as protein folding and public-domain data crunching, there are BOINC and Gridcoin to facilitate distributed scientific computing and allow anyone to donate or sell computation power for Gridcoin, but BOINC's programming interface is not as easy to use as Amazon's and Google's. In addition, BOINC requires manual approval of projects: because the BOINC agent is not containerized/virtualized, computing code could present a malware threat to the host.

I wish to create an alternative to BOINC/Gridcoin, using container/VM images as computation tasks, and support unmonitored pay-as-you-go serverless computing.

I want to know if there is a large enough market for it.

Think of it this way: the world mines 6 bitcoins/day (US$42K/day, or US$15.3 million/year). Do all "embarrassingly parallel" scientific computing projects combined have annual revenue comparable to BTC or a lesser cryptocurrency such as ETH or DOGE?

👍︎ 8
👤︎ u/larryliu7
📅︎ Jan 11 2022
How do you guys deal with racism?

I didn't know where else I could look for advice, want to know your opinion

I am from India and I have always known that racism exists everywhere, and so far I have only faced very mild situations. Like one time when I was living in Singapore, the metro guard would specifically call me and my friend over for a "random check" and go through our bags. He was an Indian too and didn't have the guts to do that to any other national.

Today it was a whole new level to me, although it might seem like just another mild incident. I have never directly been confronted or addressed to regarding my color or religion or nationality. I was on my way to the police district near Grønland to get my residence card, and this one tall man (who I knew was whispering something) walked right in front of me. This was right after I got off the Grønland metro. He immediately shouts the "N" word and screams on top of his voice, and proceeds to just walk along. I was a little afraid (being a short guy makes me feel like prey sometimes.. around bullies) and waited till he walked a few steps .. and then he manages to slip on the ice and falls. This is when he starts the never ending "N", "MF", "F off" s. And it was a small lane, I just wanted to go past him .. and he screams even louder now.. the same insults. I stood there embarrassed, afraid and looking for help or something of sense (I was shocked and didn't know how to respond). The most sensible thing I could think of was to cross the road and just head to my destination on the map. He continues walking parallel to me and goes on with his racist slurs and insults, and finally headed into some building called Galler .. I'm not sure I was panicking already. I was warned of mugging by one of my friends and I didn't want to hang around till he pulls out a freaking gun or who knows (I know that's extreme but man I was not mentally prepared for this)

All I thought I understood was .. I gotta keep out or Grønland as much as possible. (But my beloved Indian restaurants are just there) which is nowhere the answer. I want to know how you guys deal with racism, be it of any level. I do understand the whole situation was not panic worthy or worrisome .. but this being my first time ever to be on the receiving end of the racist slurs and insults.

TL;DR- Faced racist insults in the public and didn't know how to digest the whole scenario. Wanted to know how you guys revisit such places where things happen .. and how to avoid such in the future.

... keep reading on reddit ➡

👍︎ 101
👤︎ u/aarshta
📅︎ Jan 05 2022
In a parallel world where watching others poop is not embarrassing at all, you can take a shit in front of people no problem.
👍︎ 92
👤︎ u/hiha64
📅︎ Oct 30 2018
China's biggest budget movie of all time, the recently released "Battle at Lake Changjin" is literal propaganda glorifying war against the United States.

The trailer for this movie portrays the Chinese side as heroic, though I must admit it's funny watching all the CGI scenes of Chicom troops getting the shit bombed out of them.

>The film's story was commissioned by the publicity department of the Chinese Communist Party and announced as part of the 100th Anniversary of the Chinese Communist Party. The film has grossed $905 million at the worldwide box office, making it the second-highest-grossing film of 2021;[13] the highest-grossing film in Chinese cinema history;[2][14] and the highest-grossing non-English film.

The film is based on the Battle of Chosin Reservoir. This is seen by normies as a US defeat, but it was absolutely not one. While the US ultimately decided to withdraw from the Northeast theater at Hungnam for logistic reasons, it was not forced to do so for military reasons.

Despite overwhelming odds, being ambushed and outnumbered 4 to 1, the USMC fought like hell and utterly crushed the Chinese forces attempting to trap and encircle them.

>"Retreat, hell! We're not retreating, we're just advancing in a different direction!" [1st Marine Division commander Oliver P. Smith]

Just look at the wikibox:

Strength: US Committed: ~30,000 / Chinese Committed: ~120,000

Casualties: US: 1,029 killed. Meanwhile the Chinese lost 50,000-60,000 total combat strength, taking the entire PLA 9th Army out of action for months.

The Battle of Chosin Reservoir stands as a testament to a complete and total disregard for human life that the Chinese Communist Party showed to its own troops. Despite extremely harsh winter conditions, the CCP leadership drove its men like slaves into positions to bypass and try to surround the USMC units, with the aim of trapping and pocketing them.

Then, with the USMC units cut off, rather than fighting strategically and trying to dig in and pocket the Marines, the PLA leadership forced its troops into aggressive human wave attacks, hoping to overwhelm and annihilate the Americans before UN forces could react. Very early on, this approach had utterly failed, as the Marines dug in and absolutely mowed down the attackers.

As soon as the weather cleared and the US could finally employ air power (which it could not do early on), the USMC units began a breakout south which absolutely annihilated the PLA units sent to block them. By this time, the USMC units knew to

... keep reading on reddit ➡

👍︎ 126
📅︎ Jan 14 2022
Blind Girl Here. Give Me Your Best Blind Jokes!

Do your worst!

👍︎ 5k
📅︎ Jan 02 2022
This subreddit is 10 years old now.

I'm surprised it hasn't decade.

👍︎ 13k
📅︎ Jan 14 2022
Dropped my best ever dad joke & no one was around to hear it

For context I'm a Refuse Driver (Garbage man) & today I was on food waste. After I'd tipped I was checking the wagon for any defects when I spotted a lone pea balanced on the lifts.

I said "hey look, an escaPEA"

No one near me but it didn't half make me laugh for a good hour or so!

Edit: I can't believe how much this has blown up. Thank you everyone I've had a blast reading through the replies 😂

👍︎ 19k
📅︎ Jan 11 2022
What starts with a W and ends with a T

It really does, I swear!

👍︎ 6k
📅︎ Jan 13 2022
Gem for reminding us to revisit some parts of code

Hi everyone!

Although I used ruby for couple of years now I realised I haven’t created any gem, ever, so I decided to change that, so here it is:

https://rubygems.org/gems/remind_me

Short Description:

In my current job we are working with monolithic, kinda-big rails app, having around 5k .rb files in it.

One thing I noticed is that we have no systematic way of saying “Yes, I’m doing this monkey patch now, but we might not need it anymore if version X of gem Y fixes this”, “Method X is available in Rails/ActiveRecord/X version X, but we are using X-n, so lets reimplement it here and delete it after we switch” or “check performance of this method once we switch to use gem X, we might need to optimise it then, using features from X”. So even after changes occur, these small code snippets/patches/workarounds stay there, as no one now remembers that we should revisit those 😀.

This gem adds a rake task to a Rails app that scans all `*.rb` files in the app directory and checks whether there are any comments beginning with `REMIND_ME:`. If there are, it parses those comments to check whether their conditions hold: if they do, the rake task aborts and displays an appropriate message. The intention is to use this task in a CI environment (or a pre-commit hook?) so that we get reminded of all those code snippets we flagged some time ago. Then it's up to us to either update the comment (using the example from earlier: the monkey patch is still needed) or delete it (the monkey patch is no longer needed).

It still has lots of rough edges and can be polished, but it performs kinda OK: on my 13” 2020 MacBook Pro I'm able to scan all ~5k files in ~17s. Most of the work is embarrassingly parallelisable, so there is that (it finishes in around ~6s using the parallel gem).

Let me know what you think and cheers!
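For readers curious how the core idea might look, here is a minimal sketch: collect comments starting with `REMIND_ME:` and flag the ones whose condition now holds. Note that the comment format, the condition syntax (a plain gem-version check), and the hard-coded "installed versions" table below are all invented for illustration — the real gem's format and condition language may differ, so check its README for the actual syntax.

```ruby
# Hypothetical REMIND_ME comment format for this sketch:
#   # REMIND_ME: <gem> >= <version>, <message>
REMINDER = /#\s*REMIND_ME:\s*(\S+)\s*>=\s*(\S+)\s*,\s*(.+)/

# Stand-in for looking up the currently bundled gem versions;
# a real implementation would read these from Bundler/Gemfile.lock.
INSTALLED = { "rails" => "7.1.0", "nokogiri" => "1.13.0" }

# Returns the reminders in +source+ whose version condition is now met.
def due_reminders(source)
  source.each_line.with_index(1).filter_map do |line, lineno|
    next unless (m = line.match(REMINDER))
    gem_name, version, message = m.captures
    current = INSTALLED[gem_name]
    # Gem::Version gives proper semantic comparison ("7.10" > "7.9").
    if current && Gem::Version.new(current) >= Gem::Version.new(version)
      { line: lineno, gem: gem_name, message: message.strip }
    end
  end
end

src = <<~RUBY
  # REMIND_ME: rails >= 7.0, delete this monkey patch
  def patched_method; end
  # REMIND_ME: nokogiri >= 2.0, revisit this workaround
RUBY

due_reminders(src).each do |r|
  puts "line #{r[:line]}: #{r[:gem]} - #{r[:message]}"
end
# prints: line 1: rails - delete this monkey patch
```

Since each file is scanned independently, this is exactly the embarrassingly parallel shape the post mentions — e.g. wrapping the per-file scan in `Parallel.map` from the parallel gem.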

👍︎ 21
💬︎
📅︎ Dec 12 2021
🚨︎ report
Historical backgrounders for "The Red Sleeve" (2021) and "Yi San" (2007) with parallels and differences between these dramas

Index:

A. Introduction
B. "Yi San" aka "Lee San, Wind of the Palace" and PD Lee Byung-hoon, "King of the sageuks"
C. "The Red Sleeve"
D. Historical figures in "Yi San" and in "The Red Sleeve"
E. Parallels and differences between “Yi San” and “The Red Sleeve”
F. Miscellaneous backgrounders — In Ep. 7, why did Court Lady Seo look down before asking Deok-im if she's alright?; Yi San’s iconic words “I am the son of Crown Prince Sado!”; King Yeongjo is in a bad mood because >!he washed his ears!<; King Yeongjo and King Jeongjo (Yi San) wearing eyeglasses; Yi San's fan; The painting of a yellow cat in Yi San's library; Inspiration for “Gwanghang Palace”; Persimmons, marinated crabs, and the Musin Rebellion against King Yeongjo; Mount Geumgang, Hong Deok-ro, and Deok-im; Wrong costume for Queen Jungsoon in the silk cocoon ceremony in Ep. 6 of "The Red Sleeve"?; Warming up Yi San's bed
G. If you enjoyed watching "The Red Sleeve" but don't have the time or patience to watch all 77 episodes of "Yi San," maybe you can just watch some great episodes from "Yi San."

A. Introduction

Yi San and Royal Noble Consort Uibin Seong, you rock!

I wrote this discussion and analysis for the following groups of people:

(1) Those who are curious about "The Red Sleeve" because of the buzz and hype about the drama;

(2) Those who have watched "The Red Sleeve" and have become interested to know more about the lives and times of Yi San aka King Jeongjo, Royal Noble Consort Uibin Seong, King Yeongjo, Queen Jungsoon, Princess Hwawan, Hong Deok-ro, Lady Hyegyeong, etc;

(3) Those who have become interested in watching "Yi San" because of "The Red Sleeve" but are intimidated by its 77 episodes; and

(4) Those who love romance dramas but aren't fans of historical dramas.

If you belong to the 4th group, please do give "The Red Sleeve" and "Yi San" a chance. To give you an idea of how incredibly romantic these dramas are and how they will make you cry your eyes out for the next three months or so, here are scenes from "The Red Sleeve" (JPG and GIF) and from "Yi San" (GIF).

You can also watch the official trailer for "The Red Sleeve" and an MV for "Yi San" (but watch only up to the 3:09 mark of the MV because the rest are spoilers).

👍︎ 171
💬︎
📅︎ Jan 09 2022
🚨︎ report
What do you call quesadillas you eat in the morning?

Buenosdillas

👍︎ 12k
💬︎
📅︎ Jan 14 2022
🚨︎ report
I just fucked up back to back parallel parks, the second one wasn't even difficult, i am unbelievably embarrassed

There was like 6 extra feet of space and i still needed to make like a 7 point turn to get in right

👍︎ 4
💬︎
📅︎ Jun 24 2021
🚨︎ report
If you ever feel really down or embarrassed about something silly like tripping over in public, just think, in another parallel universe that version of you landed on their neck or something dramatic and died. But you didn’t! You’re still here right now!
👍︎ 3
💬︎
📅︎ Feb 08 2020
🚨︎ report
