I'm not sure how common this is, but this is a strategic mistake I realized I've been making, and I think it might be worth thinking about.
Geeking out about portfolios and studying and backtesting and so on is fun, I know, and I certainly agree that it's important to take the time to design a portfolio that is compatible with one's objectives and risk tolerance. But, as with many other things, extra effort has diminishing returns, and there soon comes a time when your time (heh) and work would be better spent elsewhere.
Case in point: lately I have been obsessing over whether to increase my bond allocation by 5% now or two or three years from now as per my initial plan (this has nothing to do with the market being "high"; I'm just concerned that I might be slightly overestimating my risk tolerance), and I spent way more time thinking about this than putting extra work towards a certification that could increase my wages by about 10%.
I... don't think I have to spell out why this is not the best use of my time and effort. So yeah, it might be worth it to keep in mind that it is suboptimal to overallocate one's resources towards portfolio optimization.
Now if I know myself, the danger is that I'll spend the next four months designing and refining the perfect allocation of my time and mental efforts towards my various work and life objectives rather than doing what I am supposed to do... :-)
Once I have two efficient portfolios how do I calculate their covariance? I don't think the way I did it is correct.
maxsr<-c(max_sr$Risk,max_sr$Return)
minvar<-c(min_var$Risk,min_var$Return)
p_cov<-cov(minvar,maxsr)
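For what it's worth, the covariance between two portfolios' returns is usually computed from their weight vectors and the asset covariance matrix, cov(A, B) = w_A' Sigma w_B, rather than from the (Risk, Return) pairs. A minimal sketch (the weights and covariance matrix below are made up for illustration):

```r
# Covariance between two portfolios from their weight vectors:
# cov(A, B) = t(w_A) %*% Sigma %*% w_B
sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3, byrow = TRUE)
w_maxsr  <- c(0.5, 0.3, 0.2)   # hypothetical max-Sharpe weights
w_minvar <- c(0.7, 0.2, 0.1)   # hypothetical min-variance weights
p_cov <- as.numeric(t(w_minvar) %*% sigma %*% w_maxsr)
print(p_cov)
```

In the full code below, the weight vectors would come out of the optimization step and sigma would be cov_mat_yearly.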
Edit full code:
#loading libraries
install.packages("DT")
install.packages("rmdformats")
install.packages("scales")
install.packages("plotly")
library(plotly)
library(scales)
library(rmdformats)
library(tidyverse)
library(tidyquant)
library(lubridate)
library(timetk)
library(knitr)
library(DT)
library(PerformanceAnalytics)
#load excel file with table 2 Chapter 3
library(readxl)
Chapter3_Table_2 <- read_excel("C:/Users/matti/OneDrive/Desktop/Tesi di laurea/CHapter3_Table_2.xlsx")
View(Chapter3_Table_2)
#isolating a string with the EURO STOXX 50 constituents
eurostoxx <- Chapter3_Table_2[[2]] # [[ ]] returns a character vector, as tq_get() expects
View(eurostoxx)
#downloading stock info from Yahoo Finance
stoxx_data <- tq_get(eurostoxx,
                     get  = "stock.prices",
                     from = "2020-12-24",
                     to   = "2021-12-24")
#calculating the stocks' logarithmic returns
#(tq_get() names the ticker column "symbol")
stoxx_returns <- stoxx_data %>%
  group_by(symbol) %>%
  tq_transmute(select     = adjusted,
               mutate_fun = periodReturn,
               period     = 'daily',
               col_rename = 'returns',
               type       = 'log')
#transforming the data into a time series
stoxx_returns_xts <- stoxx_returns %>%
  spread(symbol, value = returns) %>%
  tk_xts()
#eliminating rows with N/A values
stoxx_ret <- na.omit(stoxx_returns_xts)
#calculating daily average returns for each stock
stoxx_means <- as.data.frame(colMeans(stoxx_ret))
colnames(stoxx_means) <- "Daily Returns"
view(stoxx_means)
#calculating the var-covar matrix
cov_mat <- cov(stoxx_ret)
view(cov_mat)
#calculating annual returns and annual var-covar matrix
stoxx_means_yearly<- stoxx_means*252
cov_mat_yearly<- cov_mat*252
view(stoxx_means_yearly)
view(cov_mat_yearly)
#creating random weights to perform optimization
#(one weight per column of the returns matrix)
wts <- runif(n = ncol(stoxx_ret))
print(wts)
print(sum(wts))
#imposing the constraint weights sum = 1
wts<-wts/sum(wts)
print(sum(wts))
#calculating the portfolio's returns
#as indicated in Markowitz's model
port_ret<- sum((wts*stoxx_means_yearly))
print(port_ret)
#calculating portfolio's volatility
port_risk<- sqrt(t(wts)%*%
(cov_mat_yearly%*%wts))
print(port_risk)
#creating 100
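The post cuts off here; for completeness, a generic sketch of the usual next step in this kind of workflow (simulating many random portfolios and keeping the max-Sharpe and minimum-variance ones) might look like this. All inputs below are made up; in the original code they would be stoxx_means_yearly and cov_mat_yearly:

```r
#creating many random portfolios (generic sketch, hypothetical inputs)
set.seed(1)
n_port <- 5000
mu    <- c(0.08, 0.05, 0.12)      # stand-in for stoxx_means_yearly
sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3, byrow = TRUE)
n_asset <- length(mu)
all_wts <- matrix(0, n_port, n_asset)
port_ret <- port_risk <- numeric(n_port)
for (i in seq_len(n_port)) {
  w <- runif(n_asset)
  w <- w / sum(w)                  # weights sum to 1
  all_wts[i, ] <- w
  port_ret[i]  <- sum(w * mu)
  port_risk[i] <- sqrt(t(w) %*% (sigma %*% w))
}
sharpe  <- port_ret / port_risk
max_sr  <- all_wts[which.max(sharpe), ]     # max-Sharpe weights
min_var <- all_wts[which.min(port_risk), ]  # minimum-variance weights
```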
I am interested in optimizing the performance of my portfolio and have theta strategies as a major part of that. I am interested in hearing about others' portfolio-level strategies and getting feedback on my own. My overall goal is 1.5-2x SPY returns with lower portfolio volatility.
Portfolio Health Metrics
SPY B-Delta: 0.20-0.60%
Buying Power Utilization: 20-65% REG-T
Theta: 0.1-1.0%
Individual equity exposure: Notional<10% NLV; <5% BPu
#1. Long Portfolio
Reason: Hold a long portfolio to gain from meltup. Leverage SPAN margin to free up BPu.
Implementation: Micro futures on the NASDAQ and S&P 500, and looking at the Russell 2000 to add. Taking advantage of SPAN margin here. These will be added to maintain portfolio SPY B-Delta between 0.20% and 0.45%, allowing it to go as high as 0.60% before managing. Will likely reduce S&P 500 exposure here as I'll get it from my index theta below.
Management: Mostly hold and add on as portfolio grows. If portfolio delta gets out of hand from other positions can take off to reduce delta as a last measure.
BPu: 5-10%
#2. Index Theta
Reason: Profit from theta, aiming for 12%/yr.
Implementation: /MES puts for SPAN margin, 1256 tax treatment, and decent liquidity. Open 31-45 DTE every week, closing at 75%. Try to open when volatility is elevated.
Management: Max contracts are notionally 4x NLV, min 3/4 NLV. I plan to open more or fewer contracts in a week to stay within NLV guidelines and close losers at -300%.
BPu: 5-10%
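A quick worked example of those contract limits (the NLV and index level are hypothetical; /MES notional is $5 times the S&P 500 level):

```r
# How many /MES contracts fit a 0.75x-4x NLV notional band?
nlv          <- 100000            # hypothetical net liquidation value
spx_level    <- 4500              # hypothetical S&P 500 level
mes_notional <- 5 * spx_level     # /MES multiplier is $5 per index point
max_contracts <- floor(4 * nlv / mes_notional)
min_contracts <- ceiling(0.75 * nlv / mes_notional)
c(min = min_contracts, max = max_contracts)   # 4 and 17 contracts here
```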
#3. LEAPS/Shares
Reason: It is possible to see huge returns from uncapped trades. I'm opening these to build a long term B&H portfolio and capitalize from long term capital gains.
Implementation: Buy shares in some higher risk small caps, and LEAPS in companies I believe in long term
Management: Review holdings monthly/quarterly to see if I believe this still has room to grow and add/reduce position accordingly.
BPu: 5-15%
#4. Equity Options
Reason: Profit from volatility and swing trades. I've had decent luck with my ticker and strike picking thus far.
Implementation: Implement a few strategies I am familiar with and comfortable managing:
Put-financed call debit spread: Very bullish, slightly short vega
Management: Close at +5
Hi Everyone,
We have created a website that lets you optimize portfolio allocations. The optimization is based on MVO, which finds the allocation with the max Sharpe ratio based on historical data.
Website: https://www.finverse.in/
Some features of the website
Create portfolios of stocks, mutual funds, or ETFs. We currently support 1500+ stocks and ETFs (NSE) and 2000+ mutual funds. PPF, FD and Crypto coming soon
Backtest your allocation from 2001 and compare with a benchmark (currently only Nifty - more benchmarks to be added soon) and the max Sharpe ratio allocation
View detailed metrics like Sharpe ratio, max drawdown, drawdown plot, annual returns and many more
Save your backtested portfolios and edit them later
View Correlation between assets in your portfolio
View Trending posts on our favourite sub - ISB!
Read latest financial news
More optimizers and features coming soon!
If you have any suggestions or feedback please let me know! Would love to hear from this group
Hey guys, just wanted to introduce a project my team and I have been working hard on for the past year. It's called Simpli Finance, and put simply it's a tool that can be used for portfolio optimization and maximizing returns across multiple assets and yield farms. So how does all this work?
First off, Simpli uses historical data to calculate the expected returns on each asset, and automatically determines the optimal allocations (e.g. 20% usdc/usdt lp, 30% eth/bnb lp, etc.) using innovative artificial intelligence techniques. The tool allows you to control the amount of risk you're willing to take and does its best to give you maximal yields within your level of risk tolerance.
So all this might sound cool and all but you might be thinking, why do we even need a tool like simpli? Well one thing to consider is that the APR/APY you see on various farms like pancakeswap/pancake bunny aren't the actual returns that you will receive since there are things like impermanent loss as well as price depreciation of the various tokens that are not taken into account.
Check out the graph below for example, the expected returns are calculated using our algorithm, which takes into account historical data such as price changes, changes in APR, as well as impermanent loss. We can see that it paints a very different picture from what we're used to seeing in those various yield farms.
How impermanent loss can affect your gains
With simpli you don't have to worry about any of that as our tool will handle it for you. Right now we have only released the tool which is still in beta, so feel free to report any bugs! However there may be a token in the future, so please stay tuned! If you have any questions feel free to ask them here or in our telegram channel https://t.me/SimpliFinanceLab_CH. You can also visit our website https://simplifinance.io to learn more!
Hi everyone!
I'm working on portfolio optimization using Markowitz model in Excel. Is there some package in R built for working with this optimization model?
Thanks!
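There are R packages for this - PortfolioAnalytics and quadprog are commonly used. For the unconstrained Markowitz minimum-variance portfolio you can also get the weights in closed form with base R, w = Sigma^-1 1 / (1' Sigma^-1 1); a sketch with a made-up covariance matrix:

```r
# Closed-form minimum-variance weights (no short-sale constraint).
# The covariance matrix here is invented for illustration.
sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3, byrow = TRUE)
ones    <- rep(1, nrow(sigma))
s_inv_1 <- solve(sigma, ones)    # Sigma^{-1} %*% 1
w_mv    <- s_inv_1 / sum(s_inv_1)  # normalize so weights sum to 1
round(w_mv, 4)
```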
Has anyone here worked with "portfolio optimization theory" before?
https://en.wikipedia.org/wiki/Portfolio_optimization
Does anyone know if portfolio optimization theory can be applied outside of its intended use in finance? Can it ever be used in contexts closer to "supervised learning"?
For instance, here is a problem I thought of:
Suppose there is a car repair shop and 5 mechanics work there. Every day, new cars arrive at the shop and each mechanic has to choose 3 cars to work on. In short, the mechanics ideally want to choose cars that they think will be both:
- Easy to work on (i.e. require fewer hours)
- Pay them well (e.g. suppose the mechanics are paid 50% of the total price the customer pays)
The only problem is: the mechanics have no idea how much time any given car will require for repair (let's assume that no one knows this information exactly), nor do they know the amount of money the customers were charged (e.g. let's assume that the owner of the repair shop and the owner of the car negotiate the price in private). When making a decision on which cars to work on, the mechanics only have access to the following information:
- Total Mileage each car has driven
- Price that the customer originally purchased the car for
However, the mechanics have access to historical data. The mechanics have a dataset that contains all 4 of these variables - for all cars that all mechanics at this shop have serviced since the shop has opened, they have: Total Mileage, Original Price of Car, Number of Hours that were required (can consider this as a "supervised label"), Total Bill that the customer was charged (can consider this as a "supervised label").
On first glance, this problem sort of looks like the "Knapsack Optimization Problem" (https://en.wikipedia.org/wiki/Knapsack_problem) - however, in the "Knapsack Problem", we know in advance the "value and cost" (i.e. the "labels") of each potential item we would like to consider for the knapsack. In this car mechanic problem, we do not know the "labels" - information that will eventually be used for defining/calculating the costs and utility function.
Question: Can the mechanics train two separate supervised models (e.g. regression, random forest) on the data that they have, e.g.
Model 1: hours_car_requires = f(mileage, original_price)
Model 2: total_bill = g(mileage, original_price)
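On the modelling side, this can work: fit the two models on the historical data, then rank each day's cars by predicted mechanic pay per predicted hour. A toy sketch with simulated data (the linear models and all numbers are illustrative):

```r
# Simulate a hypothetical historical dataset for the repair shop
set.seed(42)
shop <- data.frame(mileage        = runif(200, 1e4, 2e5),
                   original_price = runif(200, 5e3, 6e4))
shop$hours <- 2 + shop$mileage / 4e4 + rnorm(200, sd = 0.5)
shop$bill  <- 100 * shop$hours + 0.002 * shop$original_price + rnorm(200, sd = 20)

f <- lm(hours ~ mileage + original_price, data = shop)  # Model 1
g <- lm(bill  ~ mileage + original_price, data = shop)  # Model 2

# Today's cars: rank by expected pay (50% of bill) per expected hour
new_cars <- data.frame(mileage        = c(3e4, 1.2e5, 1.8e5),
                       original_price = c(4e4, 1.5e4, 5.5e4))
score <- 0.5 * predict(g, newdata = new_cars) / predict(f, newdata = new_cars)
new_cars[order(-score), ]   # choose the highest-scoring cars
```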
I am currently working as a model validator in risk (banking) and I would like to start studying portfolio optimization, which we do not work with. I have in the past studied historical approaches such as Markowitz and concepts like the Sharpe ratio (during my studies, but I am rusty now), and I am looking for something more advanced and more recent. Any recommendations for a person who has worked as a quant in risk for about 2.5 yrs?
Hi people, I'm writing this post to share a paper I made that generalizes the Kelly criterion for portfolio optimization. The link to the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3833617 . The image below shows the main models of the paper.
https://preview.redd.it/d5zjz22z41x61.jpg?width=1280&format=pjpg&auto=webp&s=d0f687941f80b165eccac223570d8f888cdd176c
Recently on this forum lots of questions have been surfacing around what folks who aspire to fatFIRE, or already are fatFIRE, should invest in.
Questions often come up in the context of:
Although I'm about 15 years past my MBA/CFA classes covering the topic (I stopped the CFA after passing level 1 as my career took a different path), I figured a good over-simplified 101 on modern portfolio theory and rough optimal portfolio construction guide could help folks figure out how investment professionals solve this problem.
What is portfolio theory?
Start with a quick read of https://en.m.wikipedia.org/wiki/Modern_portfolio_theory as I'd otherwise do a crappy job of plagiarism here.
The quick TLDR is: there is a bunch of stuff you can invest in. You'll always go with investments that offer the highest return for a fixed level of risk, but you should figure out what level of risk you can sleep at night with, then construct the portfolio that gets the best return for that level of risk.
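To make the TLDR concrete, here's a toy two-asset version of "best return for the risk level you can sleep with" (all numbers invented):

```r
# Stocks vs. bonds: scan stock weights, keep the best return under a risk cap
mu  <- c(0.10, 0.04)            # expected returns (hypothetical)
vol <- c(0.18, 0.05)            # volatilities (hypothetical)
rho <- 0.2                      # correlation (hypothetical)
w   <- seq(0, 1, by = 0.01)     # weight in stocks
ret  <- w * mu[1] + (1 - w) * mu[2]
risk <- sqrt((w * vol[1])^2 + ((1 - w) * vol[2])^2 +
             2 * w * (1 - w) * rho * vol[1] * vol[2])
cap  <- 0.10                    # the volatility you can sleep with
ok   <- risk <= cap
best <- which.max(ret[ok])      # highest return within the risk cap
c(stock_weight = w[ok][best], exp_return = ret[ok][best])
```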
How do you construct your optimal portfolio of investments?
Step 1: Identify your pool of potential investment alternatives and roughly identify the level of risk and return for each.
Investment options may include Company stock purchased at a discount, mutual funds/index funds, specific stocks, treasuries, business investment opportunities, and returns should be calculated net of taxes.
Examples:
Step 2: Identify what level of overall portfolio risk you'd be able to sleep at night with. Typically younger investors with fewer obligations and safety nets are ok with more risk, while folks late in their 80s want to make sure the funds can cover medical costs late in life and might not deviate much from treasuri
I'm calculating portfolio weights using Mean-Variance Optimization. What is the time horizon that the optimized weights are good for?
MVO uses a covariance matrix created from historical price activity. Does the length of time used for the historical input data correspond to an equal forward-looking time horizon? For example if I use daily historical data from the previous 3 years, does this mean that the weights provided from the Mean-Variance Optimization are meant for a 3-year forward looking time horizon?
Data scientist here who has been dabbling in trading and quantitative finance for about two years. Currently taking the "Topics in Mathematics with Applications in Finance 18.S096" course on MIT Open Courseware and after being inspired from some of the lectures, I decided to develop an open source library for portfolio optimization and see whether the portfolios can beat the markets.
I have named the toolkit Eiten. It is developed in python and uses several strategies including a custom genetic algorithm implementation. It takes as input a list of stocks and some other parameters and builds portfolios from the implemented strategies. Next, the strategies are back and forward tested and results are all shown to the user. Here are the 4 strategies implemented right now.
Here is the GitHub link: https://github.com/tradytics/eiten
I talked to the mods about posting this here, and they recommended that I post a self-sufficient explanation of the tool here where not everyone has to go to the GitHub link to see how to use the tool, so here we go.
The tool expects a list of stocks in a file format and a comparison market index. It also accepts a bunch of parameters on bar size, historical data size, future data size (if you want to forward test) and a few other parameters. It then builds portfolios using all strategies, back and forward tests them, and then simulates the portfolio returns using Monte Carlo. Here is the command that does it all:
python portfolio_manager.py --is_test 1 --future_bars 90 --data_granularity_minutes 3600 --history_to_use all --apply_noise_filtering 1 --market_index QQQ --only_long 1 --eigen_portfolio_number 3 --stocks_file_path stocks/stocks.txt
The list of stocks we are using are AAPL, FB, NFLX, SQQQ, TSLA, MSFT, AMZN, AMD, NVDA. This command uses 5 years of historical daily data up till April 29, 2020 and builds a long-only portfolio which is then tested on the last 4-5 months.
All command line arguments are self-explanatory, so I will not explain them here. Instead, let us see the results. First, the tool builds portfolios and plots their weights. Since we are long only, we will change the negative weights to zero during the testing phase.
[Portfolio Weights for Different Strategies](https://pre