Scheduled Tasks
Turn your scripts into automated background services. Put your Python logic on autopilot and run code even while you're offline.
Your Scripts on Autopilot
Python Online provides a built-in scheduler that allows you to execute your code at regular intervals without manual intervention. This is a set-and-forget system: once a task is scheduled, our infrastructure handles the execution, logging, and lifecycle management automatically.
Common Use Cases
- Data Scraping: Collect news, prices, or stock data every hour.
- Automated Reporting: Process datasets and generate a daily summary file.
- Database Maintenance: Clean up temporary records or sync data between APIs.
- System Checks: Monitor an external website or service and log its availability.
Creating a Scheduled Task
Script automation is managed through the Tasks Tab in the Project Dashboard. To schedule a job, follow these steps:
- Identify the Entry Point: Ensure the script you want to automate is saved in your project (e.g., `scripts/scraper.py`).
- Open the Dashboard: Click your project name in the header and navigate to "Tasks."
- Click Schedule (+):
- Task Name: Give your job a descriptive label (e.g., "Daily CSV Backup").
- Script Path: Provide the path relative to your project root. (Example: `src/main.py`.)
- Frequency: Select how often the script should execute.
- Save: The task is now registered; it starts its first run immediately and then follows the scheduled interval.
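A scheduled script needs no special boilerplate; an ordinary entry point works. Below is a minimal sketch of a script you might register as "Daily CSV Backup" — the file names, directory layout, and `write_backup` helper are illustrative, not platform requirements:

```python
# A minimal script suited to scheduling, e.g. as "Daily CSV Backup".
# All paths and names here are illustrative, not platform requirements.
import csv
from datetime import datetime, timezone
from pathlib import Path

def write_backup(rows, out_dir="backups"):
    """Write rows to a date-stamped CSV under the project root."""
    Path(out_dir).mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    out_path = Path(out_dir) / f"backup-{stamp}.csv"
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "value"])  # header row
        writer.writerows(rows)
    return out_path

if __name__ == "__main__":
    path = write_backup([(1, "alpha"), (2, "beta")])
    print(f"Wrote {path}")  # appears in the task's History Log
```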
Independent Execution: Scheduled tasks are treated as independent workloads. A task belonging to "Project A" will run on its schedule even if you are actively working inside "Project B" or have the IDE closed entirely.
Tiered Execution Environments
How your task executes depends on your tier; each tier uses a different container lifecycle to balance performance and persistence.
Free Tier: The Ephemeral Task
Free users are permitted to schedule 1 Daily Task. When the scheduler triggers, the platform boots a brand new, temporary Linux container exclusively for that task.
- Clean Slate: The container has fresh memory. No variables from previous runs carry over.
- The 60-Second Wall: To ensure fairness across the cluster, Free tier tasks are granted a strict 60-second execution window. If your script takes longer, the server will forcefully terminate the container to reclaim resources.
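If a job risks brushing against the hard kill, one defensive pattern is to give the script its own time budget and stop cleanly before the platform terminates it. A sketch of that idea — the 55-second soft limit is our own safety margin, not a platform setting:

```python
# Budgeting work to fit inside the Free tier's 60-second window.
# The 55-second soft limit below is a self-imposed margin, not a
# platform-provided setting.
import time

TIME_BUDGET = 55  # seconds; leave headroom before the hard 60s kill

def process_items(items, budget=TIME_BUDGET):
    """Process as many items as the budget allows; report what was skipped."""
    start = time.monotonic()
    done = []
    for item in items:
        if time.monotonic() - start > budget:
            print(f"Budget exhausted; {len(items) - len(done)} items deferred")
            break
        done.append(item * 2)  # stand-in for real per-item work
    return done

if __name__ == "__main__":
    print(process_items([1, 2, 3]))
```

Stopping yourself a few seconds early means the run still exits cleanly with a useful log line, rather than being cut off mid-write by the container kill.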
Pro Tier: The Isolated Task
Pro users are permitted to schedule Unlimited Tasks with Hourly or Daily frequencies.
- Isolated Container: Instead of running inside your main IDE, tasks are spawned in their own disposable container with a dedicated slice of your hardware (0.25 vCPU / 512MB RAM). This ensures that a heavy background task can never slow down or crash your interactive coding session.
- Data Parity: The task container mounts your project's cloud storage, giving it full read/write access to your files and installed `.pypackages`, just like the IDE.
- No Concurrency: To guarantee predictable resource usage, the system enforces a "one task at a time" rule. If Task B is scheduled to run while Task A is still processing, Task B will be safely queued until Task A completes.
- Extended Timeouts: Pro tasks are allowed to run for up to 1 Hour before the system safety net intervenes. This provides ample time for complex data modeling or massive database migrations.
- The Coroner: Pro tasks run completely detached in the background. When they finish, the container enters an "Exited" state. A background service (The Coroner) sweeps the server, extracts the logs, updates the dashboard status, deletes the container, and returns the 512MB RAM slice to your IDE workstation. This guarantees your tasks survive backend reboots flawlessly.
Understanding Frequency & Limits
Platform capacity is allocated based on your subscription tier:
| Capability | Free Tier | Pro Tier |
|---|---|---|
| Tasks | 1 Task | Unlimited |
| Frequency | Daily | Hourly or Daily |
| Timeout | 60 Seconds | 1 Hour |
The Headless Standard
It is important to understand that scheduled tasks run in Headless Mode. Because there is no user present during execution, the environment is strictly non-interactive.
Warning: If your code contains the input() function, the execution engine will gracefully bypass the prompt and continue execution to prevent the script from hanging indefinitely.
Similarly, any attempt to render an interactive plot using matplotlib.pyplot.show() will be intercepted and discarded, as there is no visual console to display it.
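Rather than relying on the engine to skip prompts for you, a script can detect that no terminal is attached and fall back to a default on its own. A minimal defensive sketch using the standard library (the `get_threshold` helper and its default are hypothetical):

```python
# Defensive pattern for headless runs: never depend on input() or an
# interactive display. sys.stdin.isatty() is False when no terminal is
# attached, e.g. during a scheduled run.
import sys

def get_threshold(default=10):
    """Ask the user when a terminal is attached; fall back otherwise."""
    if sys.stdin is not None and sys.stdin.isatty():
        raw = input(f"Threshold [{default}]: ")
        return int(raw) if raw.strip() else default
    return default  # headless: take the default instead of prompting

if __name__ == "__main__":
    print(f"Using threshold {get_threshold()}")
```

The same idea applies to plotting: in a headless run, save the figure to a file (e.g. with `savefig`) instead of calling `show()`.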
Monitoring & Success Tracking
Since you aren't watching the code run, Python Online provides tools to verify that your automation is healthy.
Real-Time Status
The Tasks Dashboard displays a status indicator for every job:
- Success: The script finished with an exit code of 0.
- Failed: The script crashed or hit the timeout limit.
- Running: The task is currently executing.
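Because the dashboard status is driven by the exit code, a thin `main()` wrapper that returns an explicit code makes success and failure deliberate rather than accidental. A sketch (the `fetch_records` stand-in is hypothetical):

```python
# Exit codes drive the dashboard status: 0 reports Success, anything
# else reports Failed. Returning an explicit code from main() makes
# this deliberate.
import sys

def fetch_records():
    # Stand-in for real work (API call, query, scrape...).
    return ["a", "b"]

def main():
    records = fetch_records()
    if not records:
        print("No records fetched", file=sys.stderr)
        return 1  # dashboard will show Failed
    print(f"Processed {len(records)} records")
    return 0      # dashboard will show Success

if __name__ == "__main__":
    sys.exit(main())
```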
Persistent Task Logs
Every task maintains a History Log of its most recent execution. By clicking the "Log" icon, you can see the full stdout and stderr of the run. This includes all your print() statements and, most importantly, the Python traceback if the script failed.
Note: To keep your storage clean, logs are overwritten with each new run.
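Since only the latest run is retained, it helps to make each captured log self-describing. Timestamped output via the standard `logging` module is one way to do that (the `run` function and its arithmetic are placeholders for real work):

```python
# Because the platform keeps only the most recent run's log, timestamped
# output makes each capture self-describing. The logging module adds
# timestamps and routes tracebacks to stderr automatically.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("task")

def run():
    log.info("run started")
    try:
        result = 21 * 2  # stand-in for real work
        log.info("result=%s", result)
        return result
    except Exception:
        log.exception("run failed")  # full traceback lands in stderr
        raise

if __name__ == "__main__":
    run()
```

If you need more than one run of history, your script can also append a one-line summary to a file in project storage, which persists between runs.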
Best Practices for Background Scripts
- Relative Paths: Always reference files using relative paths. Your script is executed with the project root as the working directory. Use `open('data/output.txt', 'w')` rather than absolute paths.
- Robust Error Handling: Use `try/except` blocks around your main logic. If a network error occurs during a scrape, catching the error and printing a custom message will make your logs much easier to debug.
- Resource Efficiency: Avoid infinite loops. Even Pro tasks are subject to a 1-hour "Hard Kill" timer to prevent runaway processes from consuming your workstation's RAM.
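The practices above can be combined into one small pattern: relative paths from the project root, a top-level `try/except` that leaves a readable message in the log, and a bounded loop. A sketch with illustrative file names:

```python
# A sketch combining the best practices: relative paths, a top-level
# try/except that makes failures easy to read in the log, and a
# bounded loop instead of `while True`. File names are illustrative.
from pathlib import Path

def main():
    out = Path("data") / "output.txt"  # relative to the project root
    out.parent.mkdir(exist_ok=True)
    try:
        lines = [f"row {i}" for i in range(100)]  # bounded workload
        out.write_text("\n".join(lines))
        print(f"Wrote {len(lines)} lines to {out}")
    except OSError as exc:
        # A custom message beats a bare traceback when scanning logs.
        print(f"Failed writing {out}: {exc}")
        raise

if __name__ == "__main__":
    main()
```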