How do you guys run code that does schema changes on the database (AKA Migrations)?

Many ORMs and languages have migration support for database schema changes, which you need to run every time you want to apply changes to your database. I'm used to doing this on Heroku as part of their release process, but I'm not sure how to do it with AWS. Is there a concept of a one-off instance to run that migration? Maybe spin up a Lambda for that? How do you usually do that?

πŸ‘︎ 46
πŸ’¬︎
πŸ‘€︎ u/up201708894
πŸ“…︎ Jan 23 2022
🚨︎ report
Planning database schema

Every time my team needs a new database schema, it seems like one or two backend people just throw it together based off what they think makes sense in a single conversation. Then anyone working on the frontend just has to make it work. This has been problematic for me when I work on the frontend because every time I "try to make it work", I always feel like I'm missing some kind of association.

How do you or your teams decide on how to create a given database schema? Is it just one person? Is it a team effort? How do you decide what associations to make? Is it normal to update the schema as development goes on?

Edit (clarification): I work on the frontend and a REST API. I do not work on the actual database design.

Edit (answered): It turns out that the problem isn't really the database schema, but actually the result of poor API practices.

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/riboflavinrich
πŸ“…︎ Jan 13 2022
🚨︎ report
Sports database - help with schema

I am building a soccer league database & need help with the schema before I go too far.

25 years' worth of historical results (goals being the main thing) in approx. 10,000 matches between pairs of teams, where some coaches & players move to & from clubs.

Building a DB to query faster & pull out unique insights for media-based sports content (e.g. TV commentators). Can anyone advise if they would change anything, or if there are any major red flags? Tx.

https://preview.redd.it/m4fwm1g4ood81.png?width=1818&format=png&auto=webp&s=e02a0402d4824cc49fab1b60ecac1a4e382601af
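For reference, a minimal relational sketch of the structure described above — every table and column name here is an assumption, since the preview image isn't reproduced in text:

CREATE TABLE team (
    team_id  SERIAL PRIMARY KEY,
    name     VARCHAR NOT NULL
);

CREATE TABLE matches (
    match_id      SERIAL PRIMARY KEY,
    season        INT NOT NULL,
    match_date    DATE NOT NULL,
    home_team_id  INT NOT NULL REFERENCES team(team_id),
    away_team_id  INT NOT NULL REFERENCES team(team_id),
    home_goals    INT NOT NULL,
    away_goals    INT NOT NULL
);

-- coach/player moves between clubs become date-bounded rows in a link table
CREATE TABLE player_spell (
    player_id  INT NOT NULL,
    team_id    INT NOT NULL REFERENCES team(team_id),
    from_date  DATE NOT NULL,
    to_date    DATE
);

With goals stored per match, per-team and per-coach aggregates for commentary can be derived with GROUP BY queries rather than stored separately.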

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/moyoleee
πŸ“…︎ Jan 24 2022
🚨︎ report
Hey, mongo developers I have made a tool to design your database schema with ease and simplicity, hope you like it. Like πŸ‘ and comment πŸ‘‡ v.redd.it/w4l165hifta81
πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/ayush_kumar5
πŸ“…︎ Jan 10 2022
🚨︎ report
With all of the new database changes, I need to clear or reset my database; anyone know of a way to do this? To be clear, I want to remove the data, not the schema and all.

I'm still injecting indexes, which I agree were needed and should significantly improve performance, though I wish they had had code samples ready to go before launch; I'm still not 100% clear on the exact syntax.

So if anyone knows of a good way to either clear all of my data, or clone an empty version of my project, I would truly appreciate it!

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/dosangst
πŸ“…︎ Jan 18 2022
🚨︎ report
How to improve my Database Design Schema?

Hi!

I've created the following schema. It's based around Formula 1.

  • We have seasons. Each season has its own regulations (some rules carry over).
  • Each season has race weekends.
  • A race weekend consists of sessions (Practice, Qualifying, Race, etc.).
  • Points are only given out in the 'Race' session.
  • The Event table describes different events that have taken place in sessions, such as pit stops (which parts were changed) and penalties given for incidents.
  • In the Statistics table we keep track of career stats for drivers/teams. Ideally it should function as a counter: if a driver wins a race, the wins column would be updated (+1).

https://preview.redd.it/xn67ocok75581.png?width=5563&format=png&auto=webp&s=c600918c4eaab27dfcfd4d7ec4f8db7437aa69cd

This isn't a complete schema yet. I'm looking for feedback on what to improve, since I'm merely a beginner.

Any tips or tricks on how to implement 'Standings' functionality are most welcome. Or should I pull points and position data from the 'Session' table and use it to generate the driver and constructor standings for each season?
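On the standings question, one common approach is not to store standings at all but to derive them from per-session results whenever they are needed. A rough sketch, assuming a result table keyed by session and driver — every table and column name here is illustrative, not taken from the diagram:

SELECT d.driver_id,
       d.name,
       SUM(r.points) AS total_points
FROM result r
JOIN session s      ON s.session_id = r.session_id AND s.type = 'Race'
JOIN race_weekend w ON w.weekend_id = s.weekend_id
JOIN driver d       ON d.driver_id = r.driver_id
WHERE w.season_id = 2021
GROUP BY d.driver_id, d.name
ORDER BY total_points DESC;

Constructor standings would be the same query grouped by team instead of driver; a stored 'standings' table then becomes an optional cache rather than a source of truth.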

πŸ‘︎ 22
πŸ’¬︎
πŸ‘€︎ u/swinksel
πŸ“…︎ Dec 12 2021
🚨︎ report
Best database schema for storing of a conversation?

Hi all, my application has a real-time chat function implemented using Socket.IO. May I ask what the best schema is for storing the messages of a conversation? There will be keys such as participant 1's ID and participant 2's ID; should I store the messages as JSON and constantly update that attribute? And do I store these 3 attributes as a single entry in the database?
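One message per row usually ages better than a growing JSON blob per conversation, since you can paginate, index and delete individual messages. A minimal relational sketch with illustrative names (the participants table also removes the hard-coded two-participant assumption):

CREATE TABLE conversation (
    conversation_id  SERIAL PRIMARY KEY,
    created_at       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE conversation_participant (
    conversation_id  INT NOT NULL REFERENCES conversation(conversation_id),
    user_id          INT NOT NULL,
    PRIMARY KEY (conversation_id, user_id)
);

CREATE TABLE message (
    message_id       SERIAL PRIMARY KEY,
    conversation_id  INT NOT NULL REFERENCES conversation(conversation_id),
    sender_id        INT NOT NULL,
    body             TEXT NOT NULL,
    sent_at          TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Fetching a conversation is then a simple ORDER BY sent_at with a LIMIT for pagination.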

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/Joyboynamedtroy
πŸ“…︎ Dec 26 2021
🚨︎ report
Tools to generate a database schema

Is there any tool or library I could use to do schema-first development in Firebase?

For example, something like GQL where I can define my entity relationships as well as operations on those entities, then have something auto-generate the client libraries required to write this to Firebase.

I would like to define my entities like this, and just have a fully typed CRUD client get auto-generated.

model User {
  id: string;
  name: string;
  posts: Post[];
}

model Post {
  id: string;
  text: string;
}

Or do I just always have to implement these myself using low-level constructs?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/versaceblues
πŸ“…︎ Jan 10 2022
🚨︎ report
Does MYSQL8 INFORMATION_SCHEMA.COLUMNS lock database tables?

Would retrieval from the information schema lock the database if I retrieve column names from INFORMATION_SCHEMA.COLUMNS using a query such as "SELECT column_name FROM INFORMATION_SCHEMA.COLUMNS WHERE ..."?

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Haad145
πŸ“…︎ Dec 29 2021
🚨︎ report
GitHub - rdagumampan/yuniql: Painless database schema version control. Built from experience. RawSql -> Connect -> Apply -> Erase -> Destroy - Repeat. github.com/rdagumampan/yu…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/rdagumampan
πŸ“…︎ Jan 13 2022
🚨︎ report
Jailer: A tool for database subsetting, schema and data browsing wisser.github.io/Jailer/
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/qznc_bot2
πŸ“…︎ Jan 15 2022
🚨︎ report
LDAP Password Hunter: Automated tool to lookup for world-readable secrets in LDAP database building a custom list of attributes at runtime based on the CN=Schema,CN=Configuration github.com/oldboy21/LDAP-…
πŸ‘︎ 184
πŸ’¬︎
πŸ‘€︎ u/oldboy21
πŸ“…︎ Nov 03 2021
🚨︎ report
Some Production SQL Database schemas for a few different businesses (inc. Airbnb and Twitter), as an example of some real world data models. drawsql.app/templates?pag…
πŸ‘︎ 101
πŸ’¬︎
πŸ‘€︎ u/tdatas
πŸ“…︎ Oct 20 2021
🚨︎ report
Best way of keeping local and production database schema in sync without losing data

I've just started learning Laravel, and initially I was very impressed with the migrations system. At first it seemed like the migrations file operated like a database schema and you could just work with it to make amendments to your database. But now I see you run into problems if you have data in your database, since all of that goes away when you migrate the changes.

Pretty much every project I've worked on has always had a ton of data in the database while you're developing on it, so I can't see any case where I'd frequently be using the rollback-then-migrate method.

So I've seen the "artisan make:migration add_fields_to_mytable_table" method, where you create a separate migration file for the changes you want to make, leaving the data in your database intact. But this kind of burst my bubble that the migration files represented a nice, organised database schema; that wouldn't make sense if you have a bunch of these different migration files all the time. I can see that becoming disorganised quickly.

I'm coming from a pure PHP perspective. In the project I last worked on, I made a system where any changes I made to tables in my local project could be exported as JSON at the touch of a button. Then on the production server I could just load that in with a press of a button as well, and all the tables would be updated instantly with no data lost, unless I had specifically dropped some fields from my local installation, which would then be mirrored on production. I was hoping for something better than or on par with that.

Am I just too early in my learning to realise that there's a much better way of updating tables without losing data across installations, or am I perhaps missing something about the process I've already learned?


Edit: Just for clarification, the data in the local database isn't production data but a bunch of test data used in development.
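For what it's worth, the add_fields_to_mytable_table style of migration described above is purely additive: under the hood it issues something along the lines of the statement below (table and column names hypothetical), which is why the existing rows survive.

ALTER TABLE mytable
    ADD COLUMN new_field VARCHAR(255) NULL,
    ADD COLUMN another_field INT NULL;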

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/tfc224
πŸ“…︎ Nov 02 2021
🚨︎ report
Anyone have resources for walking me through creating a SQL database schema for role-based access control in a Node.js/Express application?

I've been researching and have found many ways to create a role-based access control system with Node.js, but most examples use MongoDB, and the way the authorization is implemented at the controller level has also varied.

I'm a bit new to Node and Express. I'm wondering what some of your techniques are for implementing an RBAC system, or what resources you have used to do so. The full tech stack of the app is React/Node.js+Express/Postgres.
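A minimal Postgres sketch of the classic RBAC shape — users, roles, permissions plus two junction tables; all names are illustrative rather than taken from any particular tutorial:

CREATE TABLE users (
    user_id  SERIAL PRIMARY KEY,
    email    VARCHAR NOT NULL UNIQUE
);

CREATE TABLE roles (
    role_id  SERIAL PRIMARY KEY,
    name     VARCHAR NOT NULL UNIQUE        -- e.g. 'admin', 'editor'
);

CREATE TABLE permissions (
    permission_id  SERIAL PRIMARY KEY,
    name           VARCHAR NOT NULL UNIQUE  -- e.g. 'post:delete'
);

CREATE TABLE role_permissions (
    role_id        INT NOT NULL REFERENCES roles(role_id),
    permission_id  INT NOT NULL REFERENCES permissions(permission_id),
    PRIMARY KEY (role_id, permission_id)
);

CREATE TABLE user_roles (
    user_id  INT NOT NULL REFERENCES users(user_id),
    role_id  INT NOT NULL REFERENCES roles(role_id),
    PRIMARY KEY (user_id, role_id)
);

-- "does this user have permission X?" becomes one join, typically run from Express middleware
SELECT 1
FROM user_roles ur
JOIN role_permissions rp ON rp.role_id = ur.role_id
JOIN permissions p       ON p.permission_id = rp.permission_id
WHERE ur.user_id = $1 AND p.name = 'post:delete';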

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/maholeycow
πŸ“…︎ Dec 10 2021
🚨︎ report
Upscheme: Database schema migrations made easy

If you maintain a PHP application relying on a database, you always have to care about upgrading the DB schema to newer versions. Doctrine DBAL is great but very low level, and you need to write a lot of code to get simple things done.

Make easy things easy going

That was the reason we created a package called PHP Upscheme. It uses Doctrine DBAL but offers a simple yet powerful API to get things done with a few lines of code - schema updates and data migration. Example for creating/updating a table:

$this->db()->table( 'test', function( $t ) {
	$t->id();
	$t->string( 'code', 64 )->unique()->opt( 'charset', 'binary', 'mysql' );
	$t->string( 'label' );
	$t->smallint( 'status' );

	$t->index( ['label', 'status'] );
} );

That's it! Whenever you need to add or change something in the table, you modify it there and Upscheme will update the schema in the next run. If you need to migrate data, you can create a migration task of course.

If the code looks familiar to you: yes, Laravel uses the same terse way to specify the columns. There are a few more similarities, but Upscheme goes beyond that and offers its API to all PHP projects using Composer.

Current state

Upscheme is already usable but not yet feature complete; e.g. the view handling offered by Doctrine DBAL isn't available through the same easy API yet. Nevertheless, you always have direct access to all low-level DBAL methods from the Upscheme objects, so you can use them to manage views as before.

The package is already fully documented and has almost full code coverage. We use it in the Aimeos e-commerce framework, and it saved us up to 80% of the code we had written before!

Now, we are keen on feedback to see if it also simplifies your life :-)

Have a look at the docs: https://upscheme.org

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/aimeos
πŸ“…︎ Nov 02 2021
🚨︎ report
Database schema logic

Hi!

I've created the following database schema.

  • A car is made up of different parts.
  • Each part has an expected life cycle.
  • Every session that the driver participates in causes wear on the parts.
  • The Session table has a 'laps' column (and probably also a lap length in metres, for example), which should be used to update the part health.
  • If expected = current, then the part should be discarded.

My question is related to the logic. Am I right to use a link table between Session and Part in order to track part wear (see the sketch below)? Or should I rather tie the session mileage to the car, and update the part condition through the car?

PS! I'm a beginner. You're more than welcome to help me in any way, even if it sounds too basic for you. Thank you so much!

https://preview.redd.it/wflc5qkzbc581.png?width=1383&format=png&auto=webp&s=5f9f4d22afe847333c7bbfbfbab927f119183d60
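For illustration, the link-table idea could look roughly like the sketch below (primary-key and column names are assumptions, not taken from the diagram): wear is recorded per session and per part, and the current condition is just the sum over those rows.

CREATE TABLE session_part (
    session_id  INT NOT NULL REFERENCES session(session_id),
    part_id     INT NOT NULL REFERENCES part(part_id),
    wear        NUMERIC NOT NULL,   -- e.g. laps * lap_length for that session
    PRIMARY KEY (session_id, part_id)
);

-- parts that have reached or passed their expected life
SELECT p.part_id
FROM part p
JOIN session_part sp ON sp.part_id = p.part_id
GROUP BY p.part_id, p.expected_life
HAVING SUM(sp.wear) >= p.expected_life;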

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/swinksel
πŸ“…︎ Dec 13 2021
🚨︎ report
Can someone help me understand schemas and how they relate to databases?

I've been using PostgreSQL for over 15 years and yet I never fully understood the term schema or how namespaces are used. This causes some confusion when reading docs.

I understand the general concept of namespaces, but first of all: is 'schema' just another word for 'namespace'?

Can existing databases be configured with namespaces? Can they use multiple namespaces?

Is this valuable for security, i.e. can you configure users to have access to the objects in a certain namespace without giving them access to the rest of the database?
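In Postgres terms a schema is a namespace inside a single database (the default one is called public), one database can hold many schemas, and privileges can be granted per schema — which is exactly the security use case asked about above. A small sketch with made-up names:

-- inside one database
CREATE SCHEMA sales;
CREATE TABLE sales.orders (id SERIAL PRIMARY KEY, total NUMERIC);

-- a role that can only see the sales schema
CREATE ROLE reporting LOGIN PASSWORD 'secret';
GRANT USAGE ON SCHEMA sales TO reporting;
GRANT SELECT ON ALL TABLES IN SCHEMA sales TO reporting;

-- unqualified names are resolved via the search path
SET search_path TO sales, public;
SELECT * FROM orders;   -- resolves to sales.orders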

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/stemid85
πŸ“…︎ Nov 03 2021
🚨︎ report
How to set up schema for my own OHLCV stock and crypto database with InfluxDB?

I have 15 years of intra-day OHLCV data which I want to store on my local machine with InfluxDB (I am new to it!) for retail algo trading research purposes, and I would love your advice.

For those not familiar with InfluxDB terminologies, you may refer to this document: https://docs.influxdata.com/influxdb/v2.0/reference/key-concepts/data-elements/.

I have thought of two approaches:

1.) Put everything into the same organisation 'tradedata', which contains a 'stock' bucket and a 'crypto' bucket. The 'stock' bucket would then contain different databases named after tickers, i.e. 'aapl'; the measurement for each of these databases would be 'price', with the field set being 'open', 'high', 'low', 'close' and 'volume', plus a 'source' tag indicating the source of data for each point, at a frequency of 1 min between points.

2.) Put everything into the same organisation 'tradeprices'. The 'stock' bucket would then contain three databases named '1m', '15m' and '1h', containing OHLCV data at different frequencies. In each database, the measurement would be the ticker, i.e. 'aapl', 'msft', 'btc_eth', ..., with the field set being 'open', 'high', 'low', 'close' and 'volume', plus a tag set of 'source' indicating the source of data and 'type' indicating whether it is stock or crypto for each point.

Do you mind sharing your thoughts on them, particularly which one is better for which purposes? My research mainly focuses on two areas: (1) deep reinforcement learning algo trading and (2) news sentiment analysis. I also plan to store some news data in either a separate organisation or within the same organisation but under a different bucket.

πŸ‘︎ 21
πŸ’¬︎
πŸ‘€︎ u/keeperclone
πŸ“…︎ Sep 29 2021
🚨︎ report
Question about a database schema that supports both password based and social login
  • I am using postgres 13 and I have a doubt regarding database schema

  • I want to support email, password + social login using facebook.... on my web app

  • My current schema looks like this

  • USERS

    • user_id (integer not null)
    • nickname (unique but can be null)
    • email (unique not null)
    • password (can be null)
    • email_verified (boolean not null)
    • enabled (boolean not null)
    • created_at
    • updated_at
    • picture_url (can be null)
  • SOCIAL_TOKENS

    • provider_user_id (varchar id assigned by say facebook that makes user unique on their website)
    • provider_type (facebook,google, github, etc)
    • user_id (foreign key to the users table)
    • access_token (varchar not null)
    • refresh_token (can be null)
    • expires
    • created_at
    • updated_at
  • I want to support both users with email password and ones with social login

  • I can't help but notice that my tables have a lot of nullable columns, especially the USERS table

  • How do I

    • NORMALIZE away the many nullable columns in the USERS table (see the sketch after this list)?
    • let social login users set a password?
    • let a user have multiple emails?
  • Thank you in advance for your help
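One common way to normalize this (sketch only, Postgres-style syntax, all names illustrative) is to keep USERS free of credential columns and give each login method and each email its own table; a user can then have several identities, several emails, and can add a password later:

CREATE TABLE users (
    user_id     SERIAL PRIMARY KEY,
    nickname    VARCHAR UNIQUE,
    enabled     BOOLEAN NOT NULL DEFAULT TRUE,
    created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE user_emails (
    email           VARCHAR PRIMARY KEY,
    user_id         INT NOT NULL REFERENCES users(user_id),
    email_verified  BOOLEAN NOT NULL DEFAULT FALSE
);

-- present only for users who set a password; social-only users simply have no row,
-- and "setting a password" later just means inserting one
CREATE TABLE password_credentials (
    user_id        INT PRIMARY KEY REFERENCES users(user_id),
    password_hash  VARCHAR NOT NULL
);

CREATE TABLE social_identities (
    provider_type     VARCHAR NOT NULL,   -- 'facebook', 'google', ...
    provider_user_id  VARCHAR NOT NULL,
    user_id           INT NOT NULL REFERENCES users(user_id),
    access_token      VARCHAR NOT NULL,
    refresh_token     VARCHAR,
    expires           TIMESTAMP,
    PRIMARY KEY (provider_type, provider_user_id)
);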

πŸ‘︎ 4
πŸ’¬︎
πŸ“…︎ Nov 12 2021
🚨︎ report
I made an online tool to generate SQL query scripts for creating new tables. GUI to design and create MySQL database tables, indexes, and schema. coderstool.com/generate-s…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/coderstool
πŸ“…︎ Dec 15 2021
🚨︎ report
Why is my production database not initialized with bundle exec rake db:schema:load

When I run bundle exec rake db:schema:load, I'm getting the dev and test databases initialized, but not the production database. What am I doing wrong? I have all three defined in database.yml:

production:
  adapter: mysql2
  encoding: utf8mb4
  collation: utf8mb4_bin
  reconnect: false
  database: litjunction_prod
  host: [like i'm going to share my credentials]
  username: [like i'm going to share my credentials]
  password: [like i'm going to share my credentials]
  
development:
  adapter: mysql2
  encoding: utf8mb4
  collation: utf8mb4_bin
  reconnect: false
  database: litjunction_dev
  host: [like i'm going to share my credentials]
  username: [like i'm going to share my credentials]
  password: [like i'm going to share my credentials]
  
test:
  adapter: mysql2
  encoding: utf8mb4
  collation: utf8mb4_bin
  reconnect: false
  database: litjunction_test
  host: [like i'm going to share my credentials]
  username: [like i'm going to share my credentials]
  password: [like i'm going to share my credentials]

So how do I initialize the production database?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/dahosek
πŸ“…︎ Nov 29 2021
🚨︎ report
Hi, I have the following database schema and I need to find out which products will be out of stock in the next 30 days, based on the last 90 days of sales, with just one query. Any ideas? I don't know where to start.
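Since the schema itself isn't shown here, the following is only a guess at the shape of such a query, assuming hypothetical product(product_id, stock) and sale(product_id, quantity, sold_at) tables: project the average daily sales of the last 90 days forward 30 days and compare against current stock.

SELECT p.product_id
FROM product p
LEFT JOIN sale s
       ON s.product_id = p.product_id
      AND s.sold_at >= CURRENT_DATE - INTERVAL '90' DAY
GROUP BY p.product_id, p.stock
HAVING p.stock < COALESCE(SUM(s.quantity), 0) / 90.0 * 30;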
πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/Emi970
πŸ“…︎ Oct 05 2021
🚨︎ report
What's your approach for learning a new database schema - one that you've never worked on before?

Assuming little documentation and help, what is your approach to familiarising yourself with the different tables, structure and design of a database? I'm currently in this situation and looking for advice.
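Besides reading whatever documentation exists, the catalog itself is a decent map: on most SQL engines you can walk tables, columns and constraints through information_schema (exact views and columns vary slightly by engine). For example:

-- what tables exist
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
ORDER BY table_schema, table_name;

-- columns of one table
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'orders';

-- foreign keys, i.e. how tables relate
SELECT constraint_name, table_name
FROM information_schema.table_constraints
WHERE constraint_type = 'FOREIGN KEY';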

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/if155
πŸ“…︎ Oct 22 2021
🚨︎ report
Migrating a whole database with a lot of schemas

Hi! I have a Postgres database on AWS and I want to migrate all of the data to another AWS Postgres database. How can I dump all the schemas and restore them into the other database? I heard that AWS has a tool to do migrations like that, but I can't find it.

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/machosalade
πŸ“…︎ Oct 22 2021
🚨︎ report
I Desperately Need Help! How do I Improve My Database Design Schema?

Hi everyone, I'm currently building a website for client orders of logo designs. I've already built the front end and back end, but now I'm stuck with the database. I'm certainly not advanced in this matter but I do know some notions... so I would love some help in order to complete my project.

I've tried to design the database schema, but I really do not know if it's the right way to do it or whether I can improve things here...

So the deal is that we actually have a pricing table which includes "plans", "bundles" and "promotions":

  • A plan is just the logo specifications, with some differences between the plans.
  • A bundle is a plan that includes a supplement, which could be a business card design or a graphical charter...
  • A promotion could be a plan or bundle with a certain discount.

Any advice on how I can improve this database? Thank you all for your help! :)

https://preview.redd.it/uartmrhn2fs71.png?width=1177&format=png&auto=webp&s=d637fd3e950d2ccaae473c409a73bde619f94663
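Without seeing the diagram, one way to read the plan/bundle/promotion description above in relational terms is sketched below; every table and column name is hypothetical:

CREATE TABLE plans (
    plan_id  SERIAL PRIMARY KEY,
    name     VARCHAR NOT NULL,
    price    NUMERIC NOT NULL
    -- plus the logo specification columns that differ between plans
);

CREATE TABLE supplements (
    supplement_id  SERIAL PRIMARY KEY,
    name           VARCHAR NOT NULL   -- e.g. business card design
);

-- a bundle = a plan plus one or more supplements
CREATE TABLE bundles (
    bundle_id  SERIAL PRIMARY KEY,
    plan_id    INT NOT NULL REFERENCES plans(plan_id)
);

CREATE TABLE bundle_supplements (
    bundle_id      INT NOT NULL REFERENCES bundles(bundle_id),
    supplement_id  INT NOT NULL REFERENCES supplements(supplement_id),
    PRIMARY KEY (bundle_id, supplement_id)
);

-- a promotion discounts either a plan or a bundle, never both
CREATE TABLE promotions (
    promotion_id      SERIAL PRIMARY KEY,
    plan_id           INT REFERENCES plans(plan_id),
    bundle_id         INT REFERENCES bundles(bundle_id),
    discount_percent  NUMERIC NOT NULL,
    CHECK ((plan_id IS NULL) <> (bundle_id IS NULL))
);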

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/Neat_Panda_1347
πŸ“…︎ Oct 09 2021
🚨︎ report
Database Schema Suggestions

I will quickly summarize the question first:

I want to know the optimal schema/approach for making a permission-based system. Specifically, is it recommended to have a many-to-many style field (one group row holding a list of permissions), or separate, unique group/permission pairs? In other words, if a group has 5 permissions, should that be one row or 5 rows in the database?

Context:

So I am making an authorization service for my application that contains 3 models:

Users, Groups, Permissions

Permissions are text based. Permissions are given to groups. A user is part of a particular group.

So before performing operations, I have to check if user.has_permissions('edit_abc')

Permission Model:

CREATE TABLE IF NOT EXISTS permissions (
    permission VARCHAR PRIMARY KEY,
    enabled BOOLEAN NOT NULL DEFAULT 't'
)

Group Model:

CREATE TABLE IF NOT EXISTS groups (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    permissions text[]
)

User Model:

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    group_id INT,         -- references groups(id); type assumed
    is_admin BOOLEAN      -- type assumed
)

In the Group model, I was trying to have a foreign-key array to the permissions table. However, people on the internet generally recommend avoiding array-style many-to-many fields. So I'm wondering whether I should somehow make the many-to-many foreign-key array work, or just create a separate row for each permission a group has.
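"One row per group/permission pair" is the classic junction table. A minimal sketch against the tables above (this would replace the text[] column on groups):

CREATE TABLE IF NOT EXISTS group_permissions (
    group_id    INT NOT NULL REFERENCES groups(id),
    permission  VARCHAR NOT NULL REFERENCES permissions(permission),
    PRIMARY KEY (group_id, permission)
);

-- user.has_permissions('edit_abc') then becomes:
SELECT 1
FROM users u
JOIN group_permissions gp ON gp.group_id = u.group_id
JOIN permissions p        ON p.permission = gp.permission AND p.enabled
WHERE u.id = $1 AND p.permission = 'edit_abc';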

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/tarunwadhwa13
πŸ“…︎ Nov 30 2021
🚨︎ report
Tick 1: Columns for 'hospitals_zones_joined' do not match the database table schema

Within the Accessing the SQL Database subsection, running the example command:

head(conn, 'facilities') throws the error:

ProgrammingError: (1146, "Table 'nigeria_nmis.facilities' doesn't exist")

I figured this must be because facilities is a column, but we are trying to access an entire table from the database, so replacing that with head(conn, 'hospitals_zones_joined') returns the results:

('', 0, '0000-00-00', 'maternal', 'e', 's', 'n', 'phcn_electricity', 'c_section_yn', 'child_health_measles_immun_calc', 'num_nurses_fulltime', 'num_nursemidwives_fulltime', 'num_doctors_fulltime', 'date_of_survey', 'fa', 'co', 0)

('137', 0, '0000-00-00', '', 'F', '', '', 'False', '', '', '', '', '', '2014-03-01', 'HC', 'Ay', 1)

('835', 0, '0000-00-00', 'True', 'T', 'F', '5', 'False', 'False', 'True', '0.0', '0.0', '0.0', '2014-04-13', 'HM', 'Ba', 2)

('5', 0, '0000-00-00', 'True', 'T', 'T', '0', 'False', 'True', 'False', '2.0', '0.0', '1.0', '2014-03-01', 'HX', 'Al', 3)

('427', 0, '0000-00-00', 'True', 'T', 'T', '3', 'True', 'True', 'False', '8.0', '2.0', '2.0', '2014-02-27', 'HO', 'Ob', 4)

Which seem to be some really odd results for the data frame that we loaded from the csv file, but my suspicion is that it comes from the way in which the table schema was created for this example:

CREATE TABLE IF NOT EXISTS `hospitals_zones_joined` (
    `transaction_unique_identifier` tinytext COLLATE utf8_bin NOT NULL,
    `price` int(10) unsigned NOT NULL,
    `date_of_transfer` date NOT NULL,
    ...)

This schema does not match the format of the csv file, which starts with column names like this:

'facility_name', 'facility_type_display', 'maternal_health_delivery_services', 'emergency_transport', 'skilled_birth_attendant', 'num_chews_fulltime',...

My question is then whether MariaDB can infer the types / names / lengths of columns in a csv file, or if we need to define the entire 44 fields-long schema on our own (I haven't found any solutions after a quick google search).

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/PastaGatekeeper
πŸ“…︎ Nov 08 2021
🚨︎ report
Docker/K8s database management + deployment during schema changes

Hi,

This is my work account, and my first post! Currently we are using PSQL as our DB; I like it more than MySQL, but I don't have any particular allegiance to a DB engine. The biggest issue that we are facing is migrating to a proper K8s system with a lot of "stateful" data. I am grappling with how we are supposed to manage schema changes to the underlying data as we deploy new services or modify the existing ones. We currently have periods of "downtime", but there are never major outages or periods with no traffic. We provide charging infra for robots and cars, so we can't have any "downtime" on the core API.

The egotistical answer is "you didn't design your system properly if you can't just add a new column and call it a day"; the reality is that we have added entire new wings to our business model, and some micro-services are being flattened into one more cohesive service.

What I would like to figure out:

1.) How to properly back up our data for production (something feels dirty about using pg_dump into an S3 bucket and marking it as 'latest').

2.) How we should properly roll back only a snapshot in time of changes to the DB, i.e. we deployed a change that resulted in minor errors and want to restore to just a specific point in time. I don't know if the snapshot functionality in PG is the best, or if we should utilize something external.

3.) How should we perform deployments on something like K8s when a schema change is required? And more importantly, how do we do this without an outage or any data loss (see the sketch below)? I have read about PSQL's WAL, but again I don't know if it's the best option. In proper micro-service fashion, we have 8 services, each with its own independent PSQL container running alongside just its app services.

We aren't currently using anything like Amazon's Aurora, and we would like to try and keep our infra a little more lightweight and flexible in case we need to spin up a new region on a competing data provider's platform.
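On point 3, the usual zero-downtime pattern is expand/contract: every migration is written so that the old and the new version of a service can run against the schema at the same time, and the destructive half only ships after all pods have rolled over. A rough Postgres sketch with hypothetical table and column names:

-- step 1 (expand): additive and backwards-compatible; old pods simply ignore the new column
ALTER TABLE charging_sessions ADD COLUMN tariff_id INT;

-- step 2: backfill in batches while both versions are running
UPDATE charging_sessions SET tariff_id = 1 WHERE tariff_id IS NULL AND id < 100000;

-- step 3 (contract): only after every pod reads/writes the new column
ALTER TABLE charging_sessions ALTER COLUMN tariff_id SET NOT NULL;
ALTER TABLE charging_sessions DROP COLUMN legacy_tariff_code;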

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Nov 16 2021
🚨︎ report
restagraph: App that dynamically generates REST APIs for a Neo4j database, using a schema defined within the database. github.com/equill/restagr…
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/dzecniv
πŸ“…︎ Nov 15 2021
🚨︎ report
Generating Ent Schemas from Existing SQL Databases
πŸ‘︎ 38
πŸ’¬︎
πŸ‘€︎ u/TheRevisionist_
πŸ“…︎ Oct 12 2021
🚨︎ report
Package that compares an SQLAlchemy ORM to a pre-existing database schema?

Is there a Python package that will compare a live database schema to one defined in an SQLAlchemy ORM?

I have a decent size MS SQL Server instance (6 databases, ~200 total tables) that I need to make an SQLAlchemy ORM for.

SQLAlchemy has very handy reflection which allows it to read information about a table and essentially generate an SQLAlchemy table object from that information.

What I'd like to do is write tests for my ORM that utilize reflection to verify that the ORM tables as written match those that already exist in the database. Has someone already made a Python package that does this?

I'll be using sqlacodegen to initially populate my ORM, but as the live schema changes I need to be able to run a test on my ORM and see where my table objects are no longer up-to-date.

Cross-posted in SoftwareRecommendations Stackexchange

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/dougthor42
πŸ“…︎ Oct 12 2021
🚨︎ report
Business Intelligence database schema in BI tool

Hello, I need your help to better structure the data I am working on. The current data infrastructure is:

Data sources: production DB (a copy of it), Google Analytics, Salesforce.

Data transformation: data is extracted from the sources, pushed into the BI tool, and transformed with SQL.

Issue: tables inside the BI tool are duplicating, with the same information being redundant. Moreover, similar data transformations are repeated over and over again for different analyses.

Solution in my mind: the best solution would be a data warehouse, however it won't be available before Q1 2022. I am thinking of building a database schema in the BI tool, eliminating redundant information. At the same time I want to create views with aggregate data for the most common business cases.

Any suggestions? Any better way to structure things? Best practices?

Thanks all!
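For the "views with aggregate data" idea, the general shape is sketched below in Postgres-style SQL (the source table and columns are made up); persisting one such definition in the BI tool lets every analysis reuse it instead of repeating the transformation:

CREATE VIEW monthly_revenue_by_channel AS
SELECT DATE_TRUNC('month', o.created_at) AS month,
       o.channel,
       SUM(o.amount)                     AS revenue,
       COUNT(DISTINCT o.customer_id)     AS customers
FROM orders o
GROUP BY DATE_TRUNC('month', o.created_at), o.channel;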

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Kokubo-ubo
πŸ“…︎ Oct 20 2021
🚨︎ report
Help with how to build my database schemas?

Hello all.

I am a relative newcomer to coding, and especially to Node.js. During lockdown I built a project in Django and I wanted to recreate it in Node. I felt that Django took care of too much of the logic behind the scenes and I didn't truly understand what I had done, whereas with Node I have to build everything, and that has really developed my understanding.

That being said, I am really struggling with modelling my API. With Django I had used an SQL database and linked everything by foreign keys. I have been following the Jonas Node tutorial and he has covered both embedding documents and referencing, but neither really seems to fit what I need to do.

I am looking to reference a schema of postcodes from my customers schema using Mongoose. I have 6,000+ customers already stored with their postcode, and I am looking to reference their postcode in order to be able to access the geospatial data I have stored on a Postcode schema. However, I can't seem to do it without knowing the ObjectID for each particular postcode.

I am sure this issue must come up all the time, but I can't really find any guides or tutorials for managing this solution.

Please let me know if you need me to post any schema code, etc., but as this issue is more conceptual, I didn't think it was necessary.

Thanks,

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/tcfcfc
πŸ“…︎ Oct 25 2021
🚨︎ report
Me, a database schema design intellectual
πŸ‘︎ 114
πŸ’¬︎
πŸ‘€︎ u/lukaseder
πŸ“…︎ Aug 11 2021
🚨︎ report
