Cory doesn’t miss
Hell yeah. Awesome piece.
Napster
The shared secret with my Vaultwarden server? Add MFA, and someone needs to explain to me how passkeys do anything more than saving one single, solitary click.
I think you are underestimating the ability of generative AI tools to build to a consistent style and theme. Sure, it’s a problem now, but not an insurmountable one by today’s measures, so I’m assuming Adobe will have it sorted in a few years too.
Literally just use a password manager and 2FA/MFA. It’s not a problem. We have a solution.
Might try again. It’s come leaps and bounds in the past few years. I’ve been Linux-only for the past few years after dual-booting for many, and the one thing I miss is Game Pass. Every game I’ve tried on Steam or GOG works, often better than on Windows.
Wu’s growing obsession with woo-woo BS health science is depressing and seems like a trauma response to Covid (or a grifter one).
What do you think obscuring means?
Literally illegal
Not a boat! Not a family vacation! Gasp! An… RV!?!?? SOMEONE THINK OF THE RIGHTSHOLDERS!
404 don’t miss. Best tech journalism in the biz imho
Tech bro knob slobber.
Yeah this is more elegant and closer to what I’d actually want to implement. I was more just showing what could be done in literally thirty seconds on the can with ChatGPT.
“Give me a Python script using yt-dlp that I can run on a cronjob that will download the videos from any of my subscribed channels since the last time the script was run”
You can use the following Python script to download videos from your subscribed channels since the last run. This script uses yt-dlp and stores the last download timestamp to track new videos.
First, ensure you have yt-dlp installed:
pip install yt-dlp
Then, create a script called download_videos.py:
import os
import json
import subprocess
from datetime import datetime

# Configuration
last_run_file = 'last_run.json'
download_directory = 'downloads'

# Ensure the download directory exists
os.makedirs(download_directory, exist_ok=True)

# Load the last run date (yt-dlp expects YYYYMMDD); default to today on the first run
if os.path.exists(last_run_file):
    with open(last_run_file, 'r') as f:
        last_run = json.load(f)['last_run']
else:
    last_run = datetime.now().strftime('%Y%m%d')

# Update the last run date to today
current_run = datetime.now().strftime('%Y%m%d')

# Command to get videos from subscribed channels since the last run
command = [
    'yt-dlp',
    '--download-archive', 'archive.txt',
    '--output', f'{download_directory}/%(title)s.%(ext)s',
    '--dateafter', last_run,  # yt-dlp's date filter; note the flag is --dateafter, not --date-after
    '--no-post-overwrites',
    '--merge-output-format', 'mp4',
    'https://www.youtube.com/channel/CHANNEL_ID',  # Replace with your channel URL
]

# Run the command
subprocess.run(command)

# Save the current run date
with open(last_run_file, 'w') as f:
    json.dump({'last_run': current_run}, f)

print("Download complete. Next run will check for videos since:", current_run)
Make the script executable:
chmod +x download_videos.py
Open your crontab:
crontab -e
Add a line to run the script at your desired interval (e.g., daily at 2 AM):
0 2 * * * /path/to/python /path/to/download_videos.py
Replace CHANNEL_ID in the script with your actual channel IDs, or use a playlist URL if preferred. The archive.txt file keeps track of already downloaded videos to avoid duplicates.
Well, you know what they say: “Great minds think quicker than mine and probably have already had that thought.”
Donate to the EFF for fighting that fight in court already.
Yes, but literally throwing together a script to download the day’s subscription videos to a Jellyfin media drive would be stupidly simple.
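Something like this minimal sketch, assuming yt-dlp is installed and the Jellyfin library is just a local folder; the library path and cookies file below are placeholders, and yt-dlp needs your exported YouTube login cookies to read the subscriptions feed:
import subprocess

# Placeholder paths, not anything from this thread
JELLYFIN_LIBRARY = '/mnt/media/youtube'   # wherever the Jellyfin library lives
COOKIES_FILE = '/home/me/cookies.txt'     # exported YouTube login cookies

subprocess.run([
    'yt-dlp',
    '--cookies', COOKIES_FILE,                                  # subscriptions feed requires login
    '--dateafter', 'now-1day',                                  # only the last day's uploads
    '--download-archive', f'{JELLYFIN_LIBRARY}/archive.txt',    # skip already-downloaded videos
    '--output', f'{JELLYFIN_LIBRARY}/%(channel)s/%(title)s.%(ext)s',
    'https://www.youtube.com/feed/subscriptions',               # your subscription feed
])
Drop that in the same kind of cron job as above and Jellyfin picks the files up on its next library scan.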
Jesus, man, that is brutal. Sorry to hear you’re going through that.