Simple inverted index, database implementation. github.com/foxql/database
πŸ‘︎ 65
πŸ’¬︎
πŸ‘€︎ u/foxql
πŸ“…︎ Apr 12 2021
🚨︎ report
[BBall Index] Using the Leaderboards tool in our Player Profiles database, here are the only players this season in the top 5% of NBA players in both our Playmaking and Finishing talent grades: Luka Doncic, James Harden, LeBron James, Ja Morant, Shai Gilgeous-Alexander, Kyrie Irving, De'Aaron Fox twitter.com/the_bball_ind…
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/sercialinho
πŸ“…︎ Apr 20 2021
🚨︎ report
According to the (bball-index) LEBRON database, here is the most impactful individual season for each team over the past 12 years. CP3 and Harden show up for two teams; LeBron shows up for three. Bizarre inclusions include Josh Smith for Atlanta (11/12) and Tyson Chandler for NYK (12/13).

Atlanta: Josh Smith (11/12)

Boston: Kevin Garnett (10/11)

Brooklyn/NJ: James Harden (20/21)

Chicago: Jimmy Butler (16/17)

Charlotte: Kemba Walker (17/18)

Cleveland: LeBron (09/10)

Dallas: Luka (19/20)

Denver: Jokic (20/21)

Detroit: Andre Drummond (15/16)

Golden State: Steph Curry (15/16)

Houston: James Harden (19/20)

Indiana: PG13 (13/14)

LAC: CP3 (12/13)

LAL: LeBron James (20/21)

Memphis: Mike Conley (12/13)

Miami: LeBron James (12/13)

Milwaukee: Giannis (19/20)

Minnesota: KAT (16/17)

New Orleans: CP3 (10/11)

New York: Tyson Chandler (12/13)

OKC: Durant (13/14)

Orlando: Dwight Howard (10/11)

Philadelphia: Joel Embiid (20/21)

Phoenix: Marcin Gortat (11/12)

Portland: Damian Lillard (18/19)

Sacramento: DeMarcus Cousins (13/14)

San Antonio: Kawhi Leonard (16/17)

Toronto: Kyle Lowry (15/16)

Utah: Rudy Gobert (16/17)

Washington: John Wall (14/15)

πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/EatDeeply
πŸ“…︎ Mar 24 2021
🚨︎ report
Rails is not slow, but your database probably is. How a single index can boost your app's performance

This is not a blog post, just a quick win on a boring Sunday.

Recently I opened my app to more users. I got around 1,000 signups and things instantly became quite slow. I was expecting this, by the way; my database is purposely not optimized.

My main goal with this app is to learn, among other things, so I wanted to hit a bottleneck before I started optimizing my database.

irb(main):006:0> Game::ActivityFeed.count
   (136.9ms)  SELECT COUNT(*) FROM `activity_feeds`
=> 336763

# Before add_index :activity_feeds, [:event_id, :event_type, :identity_id, :identity_type, :collection_id, :collection_type], unique: true, name: :idx_unique_event_identity_and_collection, if_not_exists: true

irb(main):003:0> sql = "SELECT `activity_feeds`.* FROM `activity_feeds` WHERE `activity_feeds`.`identity_type` = 'PlayStation::Identity' AND `activity_feeds`.`identity_id` = 18 AND `activity_feeds`.`collection_type` = 'PlayStation::Collection' AND `activity_feeds`.`collection_id` = 394 AND `activity_feeds`.`event_type` = 'PlayStation::Trophy' AND `activity_feeds`.`event_id` = 89487 AND `activity_feeds`.`activity_type` = 'Trophy' ORDER BY `activity_feeds`.`earned_at` DESC LIMIT 1"
irb(main):004:0> ActiveRecord::Base.connection.exec_query(sql)
  SQL (17012.6ms)  SELECT `activity_feeds`.* FROM `activity_feeds` WHERE `activity_feeds`.`identity_type` = 'PlayStation::Identity' AND `activity_feeds`.`identity_id` = 18 AND `activity_feeds`.`collection_type` = 'PlayStation::Collection' AND `activity_feeds`.`collection_id` = 394 AND `activity_feeds`.`event_type` = 'PlayStation::Trophy' AND `activity_feeds`.`event_id` = 89487 AND `activity_feeds`.`activity_type` = 'Trophy' ORDER BY `activity_feeds`.`earned_at` DESC LIMIT 1

# After add_index :activity_feeds, [:event_id, :event_type, :identity_id, :identity_type, :collection_id, :collection_type], unique: true, name: :idx_unique_event_identity_and_collection, if_not_exists: true

irb(main):003:0> sql = "SELECT `activity_feeds`.* FROM `activity_feeds` WHERE `activity_feeds`.`identity_type` = 'PlayStation::Identity' AND `activity_feeds`.`identity_id` = 18 AND `activity_feeds`.`collection_type` = 'PlayStation::Collection' AND `activity_feeds`.`collection_id` = 394 AND `activity_feeds`.`event_type` = 'PlayStation::Trophy' AND `activity_feeds`.`event_id` = 89487 AND `activity_feeds`.`activity_type` = 'Trophy' ORDER BY `activity_feeds`.`earned_at` DESC LIMIT 1"
irb(main):00
... keep reading on reddit ➑
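
For readers who don't use Rails, the add_index call above boils down to a single DDL statement. A rough SQL equivalent (assuming MySQL, which the backticks in the query log suggest; the column list and index name are taken from the post):

    -- equivalent of the add_index migration described above
    CREATE UNIQUE INDEX idx_unique_event_identity_and_collection
        ON activity_feeds (event_id, event_type, identity_id, identity_type, collection_id, collection_type);

With all six equality columns covered by one index, the lookup becomes an index seek instead of a scan over roughly 336k rows, which is the boost the title is talking about.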

πŸ‘︎ 56
πŸ’¬︎
πŸ‘€︎ u/pacMakaveli
πŸ“…︎ Mar 28 2021
🚨︎ report
Wrestling Survives Database 1.0 by zerogaming is available in the Pro Wrestling Sim Download Index bethebooker.net/thread/75…
πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ May 15 2021
🚨︎ report
[BBall Index] Top 3PT Shot Making seasons in our 2013-21 database: 1. Steph Curry in 2015-16, 2. Steph Curry in 2018-19, 3. James Harden in 2018-19, 4. Davis Bertans in 2019-20, 5. Steph Curry this season. Steph Curry owns 6 of the top 13 seasons in the database. twitter.com/The_BBall_Ind…
πŸ‘︎ 277
πŸ’¬︎
πŸ‘€︎ u/Robotsaur
πŸ“…︎ Mar 06 2021
🚨︎ report
The U.S. Media Index database shows news consumers who owns what niemanlab.org/2021/05/the…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/AngelaMotorman
πŸ“…︎ May 15 2021
🚨︎ report
Wrestling Survives Database 1.0 by zerogaming is available in the Pro Wrestling Sim Download Index bethebooker.net/thread/75…
πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ May 15 2021
🚨︎ report
B+ tree for a database index

From this link:

Since our goal is to minimize disk accesses whenever we are trying to locate records, we want to make the height of the multi-way search tree as small as possible. 

It sounds like the whole B+ tree is on disk. Wouldn't part of it be in memory?
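
A rough back-of-the-envelope example (illustrative numbers, not taken from the linked text) shows why the height stays small even though the tree itself lives on disk: with a fan-out of f keys per node, indexing N records needs a height of roughly log_f(N), e.g.

    f = 200 keys per node, N = 10,000,000 records
    height ≈ log_200(10,000,000) ≈ 3, i.e. about 3-4 levels

As for the question: in practice, yes; the root and the levels just below it are small enough that most engines keep them cached in the buffer pool, so a lookup usually pays a disk access only for the lowest level or two.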

πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Apr 07 2021
🚨︎ report
Why researchers created a database of half a million journal editors | Nature Index natureindex.com/news-blog…
πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Apr 14 2021
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0)

Website: https://humannootropicsindex.com

Support the project on Patreon: https://www.patreon.com/humannootropicsindex

For the last three months I've been working on a project I call the HNI (Human Nootropics Index). This project/website crawls PubMed on an hourly basis and gathers all available nootropics literature from studies done in humans. The main motive for this project is to fix the scattered state of the nootropics literature and the lack of a single place with only human studies.

Since my first post (ver. 1.0) four weeks ago, I have been working through the issues and constructive criticism the project received. I think I've done a satisfactory job of improving these aspects, and hopefully the adjustments will be well received. Below is a rundown of the changes made:

  • Added many more studies
  • Added an SSL certificate
  • Added patients and infants as demographics
  • Improved the design
  • Less whitespace
  • Added some categories
  • Added the option for people to comment, add studies themselves, and vote
  • Added trending-studies functionality
  • Improved the search function and made it stop lagging on phones
  • Made the website mobile friendly
  • Increased the crawl frequency from weekly to hourly
πŸ‘︎ 1k
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Sep 30 2020
🚨︎ report
For Oracle databases, would the use of an SSD make the optimizer more or less likely to use an index?

I’m curious whether anyone here has experience with this and knows if the optimizer still uses indexes when the data is on an SSD, or whether it just does a full table scan.
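
One way to answer this empirically rather than by rule of thumb is to compare the plans the optimizer actually produces on the two storage setups. A minimal sketch (the table and predicate here are made up for illustration):

    EXPLAIN PLAN FOR
        SELECT * FROM orders WHERE customer_id = 42;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The cost model, not the physical medium, drives the choice, but system statistics gathered on faster storage (DBMS_STATS.GATHER_SYSTEM_STATS) can shift the break-even point between an index range scan and a full table scan.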

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/TrumpPaid750
πŸ“…︎ Mar 12 2021
🚨︎ report
Index usage differences of the same database in two different environments with similar data

This is my query:

SELECT 
  COUNT (*) AS total_orders,
  COUNT (DISTINCT courier_id) AS total_couriers 
FROM  orders 
WHERE courier_id IS NOT NULL AND deleted_at IS NULL

The index:

CREATE INDEX orders_courier_id_deleted_at_index 
ON           orders (courier_id, deleted_at) 
WHERE        courier_id IS NOT NULL AND deleted_at IS NULL

There are also single BTREE indexes for courier_id and deleted_at columns.

Explain results for DB-dev:

Aggregate  (cost=370847.87..370847.88 rows=1 width=16) (actual time=2611.879..2611.880 rows=1 loops=1)
  ->  Index Only Scan using orders_courier_id_deleted_at_index on orders  (cost=0.43..360762.16 rows=2017143 width=8) (actual time=0.081..2144.635 rows=2000121 loops=1)
        Heap Fetches: 348052
Planning Time: 0.492 ms
JIT:
  Functions: 2
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 0.670 ms, Inlining 0.000 ms, Optimization 0.176 ms, Emission 2.493 ms, Total 3.339 ms
Execution Time: 2612.701 ms

Explain results for DB-production:

Aggregate  (cost=547307.43..547307.44 rows=1 width=16) (actual time=29421.264..29421.264 rows=1 loops=1)
  ->  Seq Scan on orders  (cost=0.00..537286.22 rows=2004242 width=8) (actual time=0.007..28701.857 rows=1995930 loops=1)
        Filter: ((courier_id IS NOT NULL) AND (deleted_at IS NULL))
        Rows Removed by Filter: 92692
Planning time: 9.859 ms
Execution time: 29422.512 ms

I checked, and the indexes exist in both databases. Both were created from the same schema anyway. As you can see, the record counts are also very close to each other. In fact, they hold the same records, except that the dev server is one day behind.

What could be the reason for the difference? I just couldn't find it. Any help would be appreciated.

Thank you.
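
The usual suspects when the same query flips from an index-only scan to a sequential scan are stale planner statistics, a stale visibility map or table bloat (the dev plan already shows 348,052 heap fetches), or different cost settings between the servers; the differing EXPLAIN output (JIT lines on dev only) also hints that the two servers may not be on the same PostgreSQL version or configuration. A sketch of checks one might run on production (the table name is from the post; everything else is stock PostgreSQL):

    -- refresh planner statistics and the visibility map
    VACUUM (ANALYZE) orders;

    -- compare planner-relevant settings between the two servers
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('random_page_cost', 'effective_cache_size', 'work_mem');

    -- check table bloat and when autovacuum/autoanalyze last ran
    SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'orders';

    -- then re-check the plan
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) AS total_orders, COUNT(DISTINCT courier_id) AS total_couriers
    FROM orders
    WHERE courier_id IS NOT NULL AND deleted_at IS NULL;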

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/mgsmus
πŸ“…︎ Jan 26 2021
🚨︎ report
I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index)

For the last two months I've been working on a project I call the HNI (Human Nootropics Index). This project/website crawls PubMed weekly and gathers all available nootropics literature from studies done in humans. The main motive for the project was to fix the scattered state of the nootropics literature and the lack of a single place with only human studies.

Personally, I've found myself wasting too much time scrolling through irrelevant PubMed articles, Reddit posts and animal studies. Although I understand the necessity and usefulness of rodent studies, I simply cannot get myself to take substances based on studies done in non-human subjects. I can't argue that there isn't a place for animal studies, but for the sake of not wasting money and not risking my health, I've chosen to rely only on research done in humans.

The project took longer than I expected and there are still some final touches that need attention, but for the most part, it's done. I hope this website is useful to some of you. Here is the project:

http://humannootropicsindex.com (website is under maintenance atm)

πŸ‘︎ 655
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Sep 04 2020
🚨︎ report
How to find and fix a missing database index on a RoR app scottbartell.com/2021/02/…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ScottBartell
πŸ“…︎ Feb 24 2021
🚨︎ report
23,600 hacked databases have leaked from a defunct 'data breach index' site zdnet.com/article/23600-h…
πŸ‘︎ 567
πŸ’¬︎
πŸ‘€︎ u/eis3nheim
πŸ“…︎ Nov 05 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 156
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 01 2020
🚨︎ report
Will you please measure your Index FOV and upload the results to the database, so we can crowdsource this... use the exact settings/setup you're regularly using. twitter.com/ENiKS_CZ/stat…
πŸ‘︎ 124
πŸ’¬︎
πŸ‘€︎ u/ENiKS-CZ
πŸ“…︎ Sep 01 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 186
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 02 2020
🚨︎ report
TIL about the Roud Folk Song Index, a database combining 250,000 references to thousands of traditional English language folk songs. It was started as a personal project by Steve Roud in 1970 on 3x5 index cards, and was computerized in 1993. vwml.org/roudnumber/
πŸ‘︎ 105
πŸ’¬︎
πŸ‘€︎ u/TheTacoWombat
πŸ“…︎ Dec 24 2020
🚨︎ report
For an Index page, should I calculate this info on the fly, or store it in the database?

Hey there, I am working on a simple invoicing app and for my Clients index view, which will probably show 12-15 records at a time, I want each Client's table line to include some data like:

  • How many outstanding invoices that client has (ie, 23 invoices)
  • How many Paid invoices they have
  • The dollar amount of all their invoices (ie, $18,453.52 total)
  • The dollar amount of outstanding invoices

This seems like it might be a burden to have Rails calculate all this for 12-15 clients per index page, yes? This is my first project, so I'm just not sure, but it seems to involve a lot of database work every time a user simply looks at their Client index page. So I was going to create a couple of count and total columns on the Client table, and just update them anytime an invoice is created/deleted/paid.

Does this seem reasonable?

Thanks much!
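
Either approach can work. For a sense of scale, the on-the-fly version for a page of 12-15 clients is a single grouped query, roughly like the sketch below (the table and column names are guesses at the schema, and the FILTER clause is PostgreSQL syntax):

    SELECT
      clients.id,
      COUNT(invoices.id) FILTER (WHERE invoices.status = 'outstanding') AS outstanding_count,
      COUNT(invoices.id) FILTER (WHERE invoices.status = 'paid')        AS paid_count,
      COALESCE(SUM(invoices.total), 0)                                  AS invoiced_total,
      COALESCE(SUM(invoices.total) FILTER (WHERE invoices.status = 'outstanding'), 0) AS outstanding_total
    FROM clients
    LEFT JOIN invoices ON invoices.client_id = clients.id
    GROUP BY clients.id
    ORDER BY clients.id
    LIMIT 15;  -- one page of clients

With an index on invoices (client_id, status) this stays cheap at that page size; precomputed count/total columns mainly pay off once clients have very many invoices or the page is hit very often, at the cost of keeping the counters in sync on every create/delete/pay.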

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/k2director
πŸ“…︎ Nov 10 2020
🚨︎ report
23,600 hacked databases have leaked from a defunct 'data breach index' site zdnet.com/article/23600-h…
πŸ‘︎ 62
πŸ’¬︎
πŸ‘€︎ u/eis3nheim
πŸ“…︎ Nov 05 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 68
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 01 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 68
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 01 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 74
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 01 2020
🚨︎ report
I'm looking for a database with old Japanese music notation, kind of like the Cantus Index for plainchant, or Gallica etc.

I'm looking for a database with old Japanese music notation, kind of like the Cantus Index for plainchant, or Gallica etc. Doing research, thanks!

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/flutecheck
πŸ“…︎ Nov 17 2020
🚨︎ report
FOV database updates: data from more than 70 people so far, and we have 18 headsets measured, incl. Rift S, Quest, Index, Vive Pro, Pimax, StarVR infinite.cz/projects/HMD-…
πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/ENiKS-CZ
πŸ“…︎ Sep 03 2020
🚨︎ report
ILPT Request: Any other databases for information like Index-of.co.uk/tutorials-2/ ?

Any other databases for information like Index-of.co.uk/tutorials-2/

Http://index-of.co.uk/tutorials-2/ has a tonne of good sources for stuff both illegal and legal. Are there any other good sources like this for anything else?

Or just websites/books not in this or other lists that have good information that isn’t fancied by the government.

Dark Web links are cool here as well but I already know links to a bunch of Dark Net markets. I’m more looking for as mentioned above... information. Or informative sites.

Have a great day y’all and enjoy any links people put below :)

-George

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/CashMoneyOfU_v2
πŸ“…︎ Sep 04 2020
🚨︎ report
Age-adjusted Charlson comorbidity index score is the best predictor for severe clinical outcome in the hospitalized patients with COVID-19 infection: a result from nationwide database of 5,621 Korean patients medrxiv.org/content/10.11…
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/icloudbug
πŸ“…︎ Oct 27 2020
🚨︎ report
IDB - Index DataBase - Database with lots and lots of remote sensing indices! indexdatabase.de/
πŸ‘︎ 48
πŸ’¬︎
πŸ‘€︎ u/geo-special
πŸ“…︎ Jul 30 2020
🚨︎ report
node-postgres-extras: PostgreSQL database performance insights. Locks, index usage, cache hit ratio and more github.com/pawurb/node-po…
πŸ‘︎ 50
πŸ’¬︎
πŸ‘€︎ u/pawurb
πŸ“…︎ Sep 28 2020
🚨︎ report
23,600 hacked databases have leaked from a defunct 'data breach index' site zdnet.com/article/23600-h…
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/cellis12
πŸ“…︎ Nov 25 2020
🚨︎ report
The Roud Folk Song Index is a database of around 250,000 references to nearly 25,000 songs collected from oral tradition in the English language from all over the world. Here is a list of all 25,000 songs... en.wikipedia.org/wiki/Lis…
πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/DrawingMusic957
πŸ“…︎ Nov 07 2020
🚨︎ report
Flask-PG-Extras is a Flask extension to help gain insights about your PostgreSQL database which could be useful when trying to analyze and improve database performance. It gives you a few new Flask CLI commands to run to get index usage, buffer cache hit rates, query times, slow queries, etc. github.com/nickjj/flask-p…
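
For a sense of what tools like this typically wrap (the queries below are stock PostgreSQL catalog queries, not taken from the extension itself), index usage and cache hit rate boil down to something like:

    -- how often each index is actually used
    SELECT relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    ORDER BY idx_scan ASC;

    -- buffer cache hit ratio for table heap reads
    SELECT sum(heap_blks_hit)::float
           / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
    FROM pg_statio_user_tables;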
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/nickjj_
πŸ“…︎ Aug 11 2020
🚨︎ report
Sprawl index databases

Does anyone know of any sprawl indexes that measure sprawl in US metros year over year? I’m doing some research and this would be really helpful! Thanks

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/discoverykidz
πŸ“…︎ Oct 09 2020
🚨︎ report
The Graph learns what and how to index Ethereum data based on subgraph descriptions, which define the smart contracts of interest, the events in those contracts to pay attention to, and how to map the event data to the data that #TheGraph will store in its database.

https://thegraph.com/docs/introduction#how-the-graph-works

https://preview.redd.it/shnojj2ysl061.png?width=1200&format=png&auto=webp&s=d35b1c8302e7635119c7fb003368fdad9a03df11

#Web3 #GRT #ETH #API

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/SeaReference
πŸ“…︎ Nov 21 2020
🚨︎ report
PSA: Pro Football Reference's database search tool, Play Index, will be moving behind a paywall in the next few months. If you have any fun statistical research projects on your mind, you might want to get those finished. sports-reference.com/blog…
πŸ‘︎ 49
πŸ’¬︎
πŸ“…︎ May 21 2020
🚨︎ report
UPDATE: I've crawled the entire PubMed database and collected all available literature on nootropics done in humans, here are the results (Human Nootropics Index 2.0) /r/Nootropics/comments/j2…
πŸ‘︎ 51
πŸ’¬︎
πŸ‘€︎ u/vengeancefit
πŸ“…︎ Oct 01 2020
🚨︎ report
Data Breach Index Site Leaks Over 23,000 Hacked Databases Exposing Over 13 Billion User Records - CPO Magazine cpomagazine.com/cyber-sec…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/KeyDutch
πŸ“…︎ Nov 13 2020
🚨︎ report
