• 0 Posts
  • 144 Comments
Joined 1 year ago
Cake day: June 11th, 2023





  • This is real time and based on one image.

    Deepfakes up to this point are generally not real time; they're trained on the source material and then applied to a video with various methods. Say the final video is Kermit the Frog doing a dance, but it's been deepfaked to look like Miss Piggy.

    There are tons of examples of AI that post-processes deepfakes. This is one of the few real-time ones: you link it to a webcam, give it a single photo, and you are the deepfake.

    From my understanding, that hasn’t been done yet, at least not in the AI spaces I’ve been part of.


  • Seconding ReaFIR, or really any noise-reduction plugin.

    Record 5-8 seconds of silence, open the FX, set the mode to Subtract, tick the "automatically build noise profile" checkbox, and play back the silence so it learns the noise.

    You'll see the frequency range the noise takes up. In some cases this can affect your source audio; for example, if the clicking sound sits in the same range as a higher-pitched human voice, that voice may come out warbled or inaudible.

    This can also take out car whooshing/road noise to some extent, as well as general background hum from line input, gain noise, or fans.
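
    If you're curious what the Subtract step is actually doing, here's a rough spectral-subtraction sketch in Python (numpy/scipy). It's the general technique rather than ReaFIR itself; the file names, FFT size, and 1.5x attenuation factor are placeholders.

    ```python
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft, istft

    rate, noise = wavfile.read("noise.wav")   # 5-8 s of recorded silence / room tone
    _, take = wavfile.read("take.wav")        # the actual noisy recording
    noise, take = noise.astype(float), take.astype(float)
    if noise.ndim > 1: noise = noise.mean(axis=1)   # mix to mono for simplicity
    if take.ndim > 1:  take = take.mean(axis=1)

    nperseg = 2048
    # Noise profile: average magnitude per frequency bin across the silence.
    _, _, N = stft(noise, fs=rate, nperseg=nperseg)
    profile = np.abs(N).mean(axis=1, keepdims=True)

    # Subtract the profile from the recording's spectrum, keeping the phase.
    _, _, S = stft(take, fs=rate, nperseg=nperseg)
    mag = np.maximum(np.abs(S) - 1.5 * profile, 0.0)   # 1.5 = attenuation factor
    _, clean = istft(mag * np.exp(1j * np.angle(S)), fs=rate, nperseg=nperseg)

    wavfile.write("clean.wav", rate, clean.astype(np.int16))
    ```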



  • averyminya@beehaw.org to Technology@lemmy.ml: Cut the 'AI' bullshit
    7 days ago

    We've had the tech to drastically cut power consumption for a few years now; it's just a matter of adapting existing hardware to include it.

    There's a company, MythicAI, which found that using analog computers (ones built specifically to sift through .CKPT models, for example) drastically cuts energy usage while staying 98-99% accurate. The idea is to take a digital request, convert it to an analog signal, process the signal in the analog domain, then convert the result back to a digital signal and send it to the computer to finish the task.
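
    As a toy illustration of that round trip (purely my own sketch, nothing to do with MythicAI's actual hardware or tooling), you can model the analog stage as a matrix-vector multiply wrapped in DAC/ADC quantization plus a little noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dac(x, bits=8):
        """Quantize a digital vector to the DAC's resolution (values in [-1, 1])."""
        levels = 2 ** bits
        return np.round(np.clip(x, -1, 1) * (levels / 2)) / (levels / 2)

    def analog_matmul(weights, x, noise=0.01):
        """Weights live in the analog array; the multiply is 'free' but picks up noise."""
        y = weights @ x
        return y + rng.normal(0.0, noise, size=y.shape)

    def adc(y, bits=8):
        """Digitize the analog result again."""
        levels = 2 ** bits
        scale = np.max(np.abs(y)) or 1.0
        return np.round(y / scale * (levels / 2)) / (levels / 2) * scale

    weights = rng.normal(size=(128, 256)) * 0.05   # one layer's worth of weights
    x = rng.normal(size=256) * 0.3                 # incoming activations

    digital = weights @ x                          # exact digital reference
    hybrid = adc(analog_matmul(weights, dac(x)))   # DAC -> analog multiply -> ADC

    err = np.linalg.norm(hybrid - digital) / np.linalg.norm(digital)
    print(f"relative error vs. pure digital: {err:.2%}")
    ```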

    In my experience, AI only draws 350+ watts while it's actually sifting through the model; it ramps up and down consistently based on when the GPU is using the CUDA cores and VRAM, i.e. while the program is processing an image or a text response (Stable Diffusion and KoboldAI). Outside of that, you can keep Stable Diffusion open all day idle and the power draw is only marginally higher, if it even is.
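
    That ramp-up/ramp-down is easy to watch yourself; nvidia-smi's power.draw query is a standard flag, and the one-second loop below is just a quick wrapper around it. Run it in one terminal while you kick off a generation in another.

    ```python
    import subprocess, time

    # Print GPU power draw once a second; watch it spike during an image or
    # text generation and fall back to idle afterwards.
    while True:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        )
        print(f"{time.strftime('%H:%M:%S')}  {out.stdout.strip()} W")
        time.sleep(1)
    ```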

    So according to MythicAI, the groundwork is there. Computers just need an analog co-processor attachment that takes that workload off the GPU.

    The thing is… I'm not sure how popular it will become. 1) These aren't widely available; you have to order them from the company and get a quote, and who knows if you can even order just one. 2) If you do get one, it's likely not just going to pop into a basic user's Windows install running Stable Diffusion; it probably expects server-grade hardware (which is where the majority of the power consumption comes from, so good for business, but consumer availability would be nice). And, most importantly, 3) NVIDIA has sunk so much money into GPU-powered AI. If throwing 1,000 watts at CUDA stops making strides, they may try to obfuscate this competition. NVIDIA has a lot riding on the AI wave, and word getting out that another company can cut costs, both the cost of the hardware and the cost of running it, removing the need for multiple 4090s while getting more accuracy per watt, is a threat to that.

    Oh, and 4) MythicAI is specifically geared towards real-time camera AI tracking, so they're likely an evil surveillance company, and the hardware itself isn't geared towards general-purpose AI but towards the specific models it was built around. That isn't inherently an issue, it just circles back to point 2): it's not just the hardware running it that will be a hassle, but the models themselves too.





  • Lidarr is all you need.

    You can do other methods, but this one is simple and effective. Set it up, tell it what bands you like, wait a day and you’ve got the entirety of your childhood favorites and nearly every discography you can think of. All for maybe an hour of upfront work.

    Versus remembering every band/song you ever liked, tracking it down, and downloading each one individually… Like, yeah, you can do that; it's what I do for shows and movies, for curation. But for music I have so much that curating like that just isn't as worthwhile as ticking off a band in Lidarr and having all of their stuff a few hours later.
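
    If you'd rather script it than click through the UI, Lidarr also exposes a REST API (v1, default port 8686). A rough sketch below; the profile IDs, root folder, and MusicBrainz ID are placeholders you'd pull from your own instance, and the field names follow the usual *arr layout, so double-check them against your install's API docs.

    ```python
    import requests

    LIDARR = "http://localhost:8686"   # adjust for your setup
    API_KEY = "your-api-key"           # Settings -> General in the Lidarr UI
    headers = {"X-Api-Key": API_KEY}

    # List the artists Lidarr is already monitoring.
    for artist in requests.get(f"{LIDARR}/api/v1/artist", headers=headers).json():
        print(artist["artistName"])

    # Add a new artist and have Lidarr go grab the discography.
    payload = {
        "foreignArtistId": "musicbrainz-artist-id-here",   # placeholder
        "artistName": "Some Band",                         # placeholder
        "qualityProfileId": 1,
        "metadataProfileId": 1,
        "rootFolderPath": "/music",
        "monitored": True,
        "addOptions": {"searchForMissingAlbums": True},
    }
    requests.post(f"{LIDARR}/api/v1/artist", headers=headers, json=payload)
    ```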



  • Google pays a lot to stay the default search engine.

    The other search engines mostly use overlapping indexes (largely Bing's).

    Said search engines are also nowhere near real competition for Google.

    Quite frankly, I can only think of four: DDG, Ecosia, Bing, and Kagi.

    Most people don’t know about Ecosia or Kagi. Most people hardly even know about DDG.

    I wouldn't consider YouTube as much of a monopoly, because despite it being mostly the only one, from what I understand they haven't paid out to stay the only one and don't really leverage market dominance against others (they probably do, but I just don't hear about it often). The main reason alternatives don't exist is simply the massive amount of data that YouTube needs to store and serve.



  • Gait is your walking stance, the personality of your walk: whether someone walks with a limp, favors one side, slumps a shoulder, etc. Everyone's is different, so it can be tracked.

    This is very extreme surveillance avoidance, though anyone ditching their phone to go to a protest should also think about their gait and where they're coming from and going to. If you're at that level of concern, you'd want a change of clothes in a bag, and maybe even different shoes. In like 98% of U.S. situations this doesn't apply to us or our protests; it's just good to know. The night I'm talking about, though, and one other, it was honestly necessary.

    Outside of extreme circumstances, though, these are tactics for people who have a legitimate reason to be paranoid, such as Boeing whistleblowers or journalists.



  • I mean, masking during COVID while there were a lot of protests going on was a common crossover: protect your identity and your health.

    Unfortunately these spies do not need just faces for recognition. Gait is analyzed as well, among other things, so I’m not sure how effective it really is.

    Side note: if government surveillance wants eyes on us, and masks prevent surveillance, would they have pushed masking for Covid? I don’t really feel one way or the other on it, just a conspiracy curiosity.