Hi!
I've been having issues with crackling noise when playing Apple TV movies in fullscreen on my new M1 Pro MacBook Pro. Playing them non-fullscreen was fine, but as soon as they went fullscreen, a lot of static/crackling noise came out of the speakers.
A lot of googling ensued, and I saw a lot of people complaining about crackling noises from the new M1 MacBook speakers, and plenty of threads on Apple support channels, but no solution. More googling brought me to a MacRumors forum thread where someone suggested changing the output frequency to 48,000 Hz instead of the default 44,100 Hz. This fixed it for me and other users! No downside found yet; I tried multiple videos, streaming platforms, etc., and everything still works fine!
So, the fix is simple. Open the app "Audio MIDI Setup" (installed by default in the Utilities folder, or just type "midi" in Spotlight), make sure the MacBook speakers are selected on the left, and pick 48,000 Hz in the dropdown menu. And voila, fixed!
Adding the link to the forum thread so as not to take credit for the solution; just trying to make it public: https://forums.macrumors.com/threads/weird-warbly-crackling-bizarre-audio-in-appletv-after-monterey-12-1.2327391/
I learnt in both my Calc 2 and Electrical Engineering Intro classes that a Fourier transform is used to decompose a signal into its "sub-signals" and even weight them (the component with the biggest influence on the signal has the largest spike in the FT graph).
It occurred to me that an FR graph does the same sort of thing, in that it shows the weight of a given frequency on the entire audio waveform.
Is this mathematically accurate?
If so, it's absurdly impressive just how many applications an FT has, such a nifty piece of math.
Sorry if this has been asked before (couldn't find any info or past questions) or if it's a stupid question because of some fundamental misunderstanding of audio/physics.
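For what it's worth, that weighting is easy to see numerically. A minimal NumPy sketch (the frequencies and amplitudes are made up for illustration), where the stronger component produces the taller spike in the FT magnitude:

```python
import numpy as np

# Sketch of the idea above: a signal built from two sines, where the
# stronger component produces the taller spike in the FFT magnitude.
fs = 1000                      # sample rate (Hz), chosen for the example
t = np.arange(fs) / fs         # one second of samples
signal = 3.0 * np.sin(2 * np.pi * 50 * t) + 1.0 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

print(freqs[np.argmax(spectrum)])   # -> 50.0, the dominant component
```

The tallest spike lands at 50 Hz because that sine has three times the amplitude of the 120 Hz one, which is exactly the "weighting" described above.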
Is it possible and are there any utilities to "invert" the frequency content of a piece of audio, high to low? Like mapping the amplitude of anything at 20Hz to 20000Hz, 21Hz to 19999Hz, 22Hz to 19998Hz, etc, and vice versa? It probably wouldn't sound very musical in a lot of instances but I can imagine it would make for extremely cool sound design opportunities.
However, it's probably not feasible, right? I would assume due to the sheer amount of processing power required to change every divided part of a single sample to its "inverse", tens of thousands of times a second. (E: Well, certainly it couldn't be done in real-time, unless you had a really low sample rate, but maybe it could be achieved as a slow rendered effect?)
If this has been pondered on before (I'm sure I'm not the first), is there anywhere I can find more info on it? Seems like an interesting idea to me.
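As an offline render, the band-reversal described above is only a few lines: FFT the audio, reverse the order of the bins inside the 20 Hz to 20 kHz band, and inverse-FFT. A sketch (function name and band edges are illustrative assumptions; reversing complex bins like this scrambles phase, so treat it purely as a sound-design effect):

```python
import numpy as np

# Sketch of the "spectral inversion" idea: take an FFT, reverse the order
# of the bins within a band (here 20 Hz .. 20 kHz), and inverse-FFT.
# Offline render, not real-time; names and parameters are illustrative.
def invert_spectrum(x, fs, lo=20.0, hi=20000.0):
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    spec[band] = spec[band][::-1]        # map lo<->hi, lo+1<->hi-1, ...
    return np.fft.irfft(spec, n=len(x))

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)      # a 100 Hz tone...
flipped = invert_spectrum(tone, fs)     # ...comes out near 19920 Hz
```

The processing cost is just two FFTs, so it is cheap as a rendered effect; the real obstacle to doing it live is that the whole block of audio has to be buffered before the FFT can run.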
How is the frequency of spoken word typically changed? I attempted to do this the same way you would any other waveform and mixed it with a complex wave:
x_out = x_in * e^(iωt), where ωt = 2π * (freq / fs) * idx.
This gives a complex output which looks great in an FFT. I then saved this to a wav file with just the real part and listened to the audio. The output sounds extremely choppy. I think this means that there are phase discontinuities but can't really understand where.
Is there an issue with my math or method here? What is the normal way of changing the frequency of spoken audio?
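One known catch with that method: taking the real part of x_in * e^(iωt) keeps both the up-shifted and the down-shifted copy of the spectrum (the real part is the sum of both sidebands), and the two interfere. The usual fix is to shift the analytic signal instead, so only one sideband exists — this is what scipy.signal.hilbert computes, sketched below with a plain-NumPy equivalent (tone and shift values are arbitrary). Note this shifts every frequency by a constant offset; natural-sounding pitch change of speech is normally done with a phase vocoder or PSOLA instead, which scale frequencies multiplicatively.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (equivalent to scipy.signal.hilbert):
    # zero the negative-frequency half, double the positive half.
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:(len(x) + 1) // 2] = 2.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

def freq_shift(x, f_shift, fs):
    # Shift all frequencies of a real signal up by f_shift Hz:
    # multiply the analytic signal by e^(i*2*pi*f_shift*n/fs), take Re.
    n = np.arange(len(x))
    return np.real(analytic_signal(x) * np.exp(2j * np.pi * f_shift * n / fs))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
shifted = freq_shift(tone, 100.0, fs)   # energy moves from 440 Hz to 540 Hz
```

Because only the positive-frequency copy gets shifted, the output is free of the mirror-image term that causes the beating/choppiness when you just take the real part of the complex product.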
For context, I'm using a One by Apogee as my audio interface. I noticed certain details are more noticeable (such as drums or bass being louder) when I listen through my audio interface than when my headphones are plugged directly into my computer. I'm just trying to understand whether my audio interface is boosting certain frequencies (and is therefore not ideal for mixing), or whether it's amplifying everything evenly and I'm just noticing certain details because it's louder.
My situation: I'm using a Marantz PM6006, which only has speaker-level outputs, to drive two bookshelf speakers. I want to add a subwoofer to this setup; the one I have available has only a line-level input.
As I understand, speaker-level to line-level conversion can be accomplished with a simple voltage divider made with power resistors.
Of course, doing just that would have my speakers still trying to reproduce the full range instead of delegating the lows to the sub. Subwoofers with high-level inputs usually do the crossover themselves, offering outputs which continue to the speakers high-passed. So I was thinking of having a set of simple resistor-inductor frequency filters to split the frequency bands outside of the subwoofer.
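As a sanity check on the numbers involved, here is a back-of-envelope sketch of both stages (all component values here are illustrative assumptions, not a tested design):

```python
import math

# 1) Speaker-level to line-level: a resistive divider.
#    Vout = Vin * R2 / (R1 + R2)
R1, R2 = 10_000.0, 1_000.0        # ohms (illustrative values)
speaker_v = 14.0                  # volts, roughly 25 W into 8 ohms
line_v = speaker_v * R2 / (R1 + R2)
print(round(line_v, 2))           # ~1.27 V, near consumer line level

# 2) First-order low-pass toward the sub path: a series inductor into a
#    resistive load has corner frequency f_c = R / (2*pi*L).
R_load = 8.0                      # ohms, nominal speaker impedance
L = 15.9e-3                       # henries (illustrative value)
f_c = R_load / (2 * math.pi * L)
print(round(f_c))                 # ~80 Hz corner, a typical sub crossover
```

The same formula family covers the high-pass side (a series capacitor gives f_c = 1/(2*pi*R*C)); the main real-world wrinkle is that a speaker's impedance is not a flat 8 ohms, so the actual corner drifts with frequency.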
In effect my setup would look something like this:
┌───────────┐            ┌───────────┐      ┌────────────┐     ┌───────────┐
│           │  speaker   │           │  ┌──▶│ Speaker to │────▶│ Subwoofer │
│ Amplifier │───────────▶│ Frequency │──┤   │ line-level │     └───────────┘
│           │  level     │ crossover │  │   │ conversion │
└───────────┘            └───────────┘  │   └────────────┘
                                        │   ┌────────────┐
                                        └──▶│  Speakers  │
                                            └────────────┘
The question I have is: can those two tasks (frequency-band splitting and speaker-to-line-level conversion) really be performed well by such simple circuits? I probably wouldn't build them myself, as there are tons of cheap converters that do these things (meant mostly for the automotive audio sector). The thing I'm worried about is a potential loss in audio quality; I'm not sure whether home audio equipment usually uses more sophisticated circuitry for these tasks. So my main question is: should I expect much audio quality degradation if I pursue my plan, using simple circuits to do the line-level conversion and frequency crossover?
I use a Behringer UMC204HD audio interface for my XLR headset. When I hot-plug it on a running system, it works fine. But when I leave the interface connected and wake up from suspend, it still works, but somehow the audio comes through at a lower frequency. For video calls the voices sound deeper/lower, but the audio stays in sync, so I assume some bits are dropped. For audio like Spotify, which seems to have a more direct path to the interface, it seems the whole track is played slower, so I assume no bits are dropped. This is all on Ubuntu 20.04 LTS with PulseAudio, nothing with JACK or PipeWire.
So my guess is that somehow the UMC204HD is being used at a lower clock rate. So my questions:
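If that guess is right, one thing worth checking (a suggestion, not a confirmed fix) is what sample spec PulseAudio negotiated after resume — `pactl list sinks` shows it under "Sample Specification" — and then pinning the rates in `daemon.conf` so there is nothing to renegotiate:

```ini
# ~/.config/pulse/daemon.conf -- pin the rates PulseAudio may pick
default-sample-rate = 48000
alternate-sample-rate = 48000
```

After editing, restart PulseAudio with `pulseaudio -k` and reproduce the suspend/resume cycle to see whether the rate still changes.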
I got two pairs of Sennheiser EW 100 G4 transmitters/receivers. The transmitters are connected to Sanken COS-11D lavaliers, and the receivers go into inputs 1 and 2 of my Zoom F6.
On the receiver (they are both in the A1 frequency range, by the way), I scan for the best frequency and then sync my transmitter to it.
No matter what frequency I use, I can hear this fucking terrible noise, which I can only describe as signal interference.
Whatβs the best frequency range for New York? Did I mess up and get the wrong band? Is A1 no good?
Specifically, A1 is 470 to 516 MHz.
I’m so fucking frustrated lol. I even walked around my house to see if it went away; no luck. Also, phantom power was not enabled for any input. Figured I should mention.
I tried using the Sennheiser frequency finder, but I donβt think it exists anymore. Itβs now called SIFA, and just lists all the frequency bands that are legal to use, instead of giving you the option to search by ZIP Code for the optimal band.
Can any sound people in New York chime in with some support? I am an indie filmmaker, I've spent the last 45 minutes googling this and two hours troubleshooting it with my setup, and I'm so beyond frustrated.
Thanks in advance for any insight or support.
My zip is 11374 if that helps (queens, ny)
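For reference, band A1 (470–516 MHz) sits inside the US UHF TV band, so interference there usually comes down to which TV channels are broadcasting nearby. A small sketch using the standard US 6 MHz channel plan, showing which channels overlap A1 — the next step would be checking which of those are active around a given ZIP code:

```python
# Which US UHF TV channels overlap Sennheiser band A1 (470-516 MHz)?
# US UHF TV channel n (14 and up) occupies a 6 MHz slot starting at
# 470 + (n - 14) * 6 MHz (channel 14 = 470-476 MHz, and so on).
def channel_edges(n):
    low = 470 + (n - 14) * 6
    return low, low + 6

a1 = (470, 516)
overlapping = [n for n in range(14, 37)
               if channel_edges(n)[0] < a1[1] and channel_edges(n)[1] > a1[0]]
print(overlapping)   # -> channels 14 through 21
```

In a dense RF market like New York, several of those channels will be occupied by full-power stations, which is consistent with a scan still landing on noisy frequencies.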
In the fighting game Killer Instinct, one of the characters produces high-pitched noises to trigger an action figure, and you can't turn it off. It's supposed to be inaudible, but some people can hear it with certain setups. I can't play that character against my friend because of that, so a program that cuts out sounds above x Hz would be great.
These audio bugs need to be an even higher priority than they have been; it's insane. Not like a bug report even helps, since they just happen sporadically every 20 matches or so with no discernible rhyme or reason, and have for forever at this point.
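On the "cut everything above x Hz" idea: that is a low-pass filter, and offline it's only a few lines. A sketch (cutoff and rates are arbitrary assumptions; a real-time fix would be a system-wide EQ doing the same job live):

```python
import numpy as np

# Cut everything above a chosen frequency with a brick-wall FFT filter.
def remove_above(x, fs, cutoff_hz):
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[freqs > cutoff_hz] = 0        # zero every bin above the cutoff
    return np.fft.irfft(spec, n=len(x))

fs = 48000
t = np.arange(fs) / fs
# Pretend input: audible content at 440 Hz plus a 17 kHz "mosquito" tone.
audio = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 17000 * t)
clean = remove_above(audio, fs, 15000) # the 17 kHz tone is gone
```

A system-wide parametric EQ with a low-pass or notch band around the offending frequency achieves the same effect on live game audio without any coding.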
My super basic understanding is that when sound is digitized, the sample rate has to be double the source frequency, so the frequency is doubled. And the reverse happens when it's turned back into an analog signal.
So in other words, the DAC in my computer takes the 44.1 kHz audio from a Blu-ray and sends a 22.05 kHz analog signal to my headphones. I checked, and my headphones have a frequency response of 15–28,000 Hz, therefore all of the sound will be reproduced (whether I can hear it all or not).
Is this correct?
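One quick numerical check of the premise (a sketch, not an authoritative answer): sample a 10 kHz tone at 44.1 kHz and look where the energy sits in the sampled data. The represented frequency is unchanged by sampling; the sample rate being double the highest frequency is the condition for capturing it, not a division applied to it:

```python
import numpy as np

# Sample a 10 kHz tone at 44.1 kHz and see where the energy sits.
fs = 44100
t = np.arange(fs) / fs                 # one second of samples
tone = np.sin(2 * np.pi * 10000 * t)   # 10 kHz source

freqs = np.fft.rfftfreq(len(tone), d=1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(tone)))]
print(peak)   # -> 10000.0: the sampled signal still represents 10 kHz,
              # not 5 kHz; 44.1 kHz sampling can carry content up to ~22.05 kHz
```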
How exactly do frequency analyzers take a waveform and calculate the frequencies that make it up? For example, the plugin SPAN by Voxengo takes an audio input and calculates the frequencies that make it up. How does it do this?
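The general answer for analyzers of this kind (not SPAN's exact internals, which aren't documented here) is the short-time Fourier transform: repeatedly take a windowed chunk of samples, FFT it, and plot the magnitude of each frequency bin. One analysis frame, sketched (window and frame sizes are arbitrary choices):

```python
import numpy as np

# One analysis frame of the kind an analyzer computes many times per second.
fs = 44100
frame = 4096                                # analysis window length (arbitrary)
t = np.arange(frame) / fs
chunk = np.sin(2 * np.pi * 1000 * t)        # pretend input: a 1 kHz tone

windowed = chunk * np.hanning(frame)        # window to reduce spectral leakage
mags = np.abs(np.fft.rfft(windowed))        # magnitude per frequency bin
freqs = np.fft.rfftfreq(frame, d=1 / fs)

print(freqs[np.argmax(mags)])               # peak lands within one bin of 1 kHz
```

A live analyzer just repeats this on overlapping frames and smooths the result over time; the bin spacing (fs / frame, here about 10.8 Hz) sets the frequency resolution of the display.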
So, I am aware that Trackmania works only with CD audio, but the default frequency option in the settings is unchangeable for me, and I've tried many things. I just can't change it, and I've come to the conclusion that the only way I'm going to get audio is if there is some other way around this issue. I don't know of anything, so I was wondering if people here would know.
Hi all, as the title says, I'm looking for an amp that can handle audio frequencies equivalent to microwaves, and is able to diffuse ultra-speed power-technical djent through hyperspace for interdimensional aliens with n-dimensional ears capable of enjoying such avant-garde music. If anyone has any leads, please let me know.
What I'm trying to do, is output a file containing pulse-frequency modulated pulses, from an input file containing WAV audio (open to using other FOSS tools). I want the 450MHz signal to pulse at full power for a specified period of time (say 100 microseconds), every time a peak is detected in the WAV file (which may also require resampling). I've looked through the available core blocks in GRC, and this flow graph contains the ones that seem to be relevant: https://imgur.com/a/WWtB7eM
What is the best approach to write PFM to a file, to be transmitted by a HackRF later?
This writeup about using PWM to play audio is close to what I'm after.
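Before wiring up GRC blocks, the peak-to-pulse logic itself can be prototyped in a few lines of NumPy. This sketch (synthetic input in place of the WAV, and the output rate, threshold, and filename are all assumptions) writes raw float32 samples that a GRC File Source could read:

```python
import numpy as np

fs_out = 2_000_000                   # assumed sample rate for the pulse file
pulse_len = int(100e-6 * fs_out)     # 100 microsecond full-power pulse

# Stand-in for the WAV input: a quiet tone with a few loud "peaks".
fs_in = 44_100
t = np.arange(fs_in) / fs_in
audio = 0.2 * np.sin(2 * np.pi * 440 * t)
audio[[5000, 20000, 35000]] = 1.0    # artificial peaks for the demo

threshold = 0.9
peak_idx = np.flatnonzero(audio > threshold)

# Map each input peak to a time, then to an output-rate sample index,
# and write a full-power pulse of pulse_len samples there.
out = np.zeros(int(len(audio) / fs_in * fs_out), dtype=np.float32)
for i in peak_idx:
    start = int(i / fs_in * fs_out)
    out[start:start + pulse_len] = 1.0

out.tofile("pfm_pulses.f32")         # raw float32, readable by a File Source
```

From there the file can be fed to a HackRF sink in a separate flow graph; a real version would also want a resampler and a smarter peak detector (e.g. local maxima with a hold-off) in place of the simple threshold.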
I want to drive a ribbon tweeter (Fountek NeoX) from 30 kHz to ~60 kHz. What sort of amplifier topologies should I be researching? Also, are there any gotchas you can think of that will need to be designed around? Thanks in advance for any advice or literature you can point me to.
I made a project using the TMRpcm library and an SD card. I'm getting a high-frequency squeal on my audio signal that starts after I begin playback and doesn't stop even after the sound file has finished playing. I'm assuming it's some kind of electrical interference from my wires, but I have no idea, since I only get noise after playback starts in the code. Is it possible to filter out the high-frequency noise with a simple component? My sound file is a very bassy noise, so I wouldn't mind losing the highs.
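On the "simple component" question: a first-order RC low-pass between the output pin and the amplifier/speaker is the usual cheap option; its corner frequency is f_c = 1 / (2πRC). A sketch of picking values (the component values are illustrative assumptions, not a tested recommendation):

```python
import math

# First-order RC low-pass corner frequency: f_c = 1 / (2*pi*R*C).
# A series resistor followed by a capacitor to ground; signals well below
# f_c pass, while higher-frequency content (like a PWM squeal) rolls off.
R = 1_000.0      # ohms, series resistor (illustrative)
C = 33e-9        # farads (33 nF) to ground after the resistor (illustrative)
f_c = 1 / (2 * math.pi * R * C)
print(round(f_c))   # ~4823 Hz corner; bassy content passes mostly untouched
```

Since TMRpcm generates audio via PWM, some squeal at the PWM carrier frequency is expected, and this kind of filter is the standard way to knock it down; it won't help if the noise is actually coupling in through the supply or wiring.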