Streamlit File Uploader: How to Use st.file_uploader
st.file_uploader is Streamlit's built-in upload widget for bringing user files into an app.
If your app needs people to upload CSVs, images, PDFs, or batches of documents before you analyze or process them, this is the feature that makes that workflow possible.
In plain terms, st.file_uploader is the bridge between a user's local files and your Streamlit code. Instead of hardcoding a file path on your machine, you let the user provide the file directly from the browser.
Why file upload matters in Streamlit
File upload is one of the features that turns a Streamlit script into a useful application.
Without uploads, many apps stay demo-like. With uploads, you can build:
- CSV analysis tools
- image review tools
- document processing apps
- data-cleaning utilities
- lightweight internal tools for non-technical users
After reading this guide, you should be able to:
- accept one file, many files, or a directory
- read uploaded files correctly in Python
- validate uploads before processing them
- handle common mistakes around file size, buffering, and reruns
Quick Answer
Use st.file_uploader when you want users to upload one file, multiple files, or a whole directory into a Streamlit app.
- single file: default behavior
- multiple files: accept_multiple_files=True
- directory upload: accept_multiple_files="directory"
- per-widget size cap: max_upload_size
Quick Start
Upload one CSV file
```python
import pandas as pd
import streamlit as st

uploaded_file = st.file_uploader("Upload a CSV file", type=["csv"])

if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)
    st.dataframe(df)
```

An uploaded file is returned as an UploadedFile object, which behaves like a file-like buffer. That means many Python libraries can read it directly.
What st.file_uploader helps you do
The widget solves a simple but important problem: it lets the app work with user-provided inputs instead of only developer-provided local files.
That is useful when:
- each user has their own dataset
- the app should process documents on demand
- you want to preview user content before analysis
- you are building a no-code or low-code interface on top of Python logic
In practice, it is often the starting point for the rest of the app's workflow.
Core arguments you should know
| Argument | What it does |
|---|---|
| label | Text shown above the uploader |
| type | Restricts allowed file extensions |
| accept_multiple_files | Accepts one file, many files, or a directory upload |
| key | Stable widget identity |
| help | Tooltip for extra guidance |
| on_change | Callback that runs when the selection changes |
| disabled | Prevents interaction |
| label_visibility | Shows or hides the label |
| max_upload_size | Limits file size for this widget |
| width | Controls layout width |
Single-file upload patterns
Read text or JSON
```python
import json
import streamlit as st

uploaded_file = st.file_uploader("Upload JSON", type=["json"])

if uploaded_file is not None:
    payload = json.load(uploaded_file)
    st.write(payload)
```

Read an image
```python
import streamlit as st
from PIL import Image

uploaded_file = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])

if uploaded_file is not None:
    image = Image.open(uploaded_file)
    st.image(image, caption=uploaded_file.name)
```

Upload multiple files
Set accept_multiple_files=True to receive a list.
```python
import pandas as pd
import streamlit as st

uploaded_files = st.file_uploader(
    "Upload one or more CSV files",
    type=["csv"],
    accept_multiple_files=True,
)

for uploaded_file in uploaded_files:
    df = pd.read_csv(uploaded_file)
    st.subheader(uploaded_file.name)
    st.dataframe(df.head())
```

Upload a directory
Modern Streamlit also supports directory uploads through accept_multiple_files="directory".
```python
import streamlit as st

uploaded_files = st.file_uploader(
    "Upload a folder of images",
    type=["png", "jpg", "jpeg"],
    accept_multiple_files="directory",
)

for uploaded_file in uploaded_files:
    st.write(uploaded_file.name)
```

This is useful for image batches, local datasets, and document collections that users already keep in folders.
Working with uploaded files
Read raw bytes
```python
import streamlit as st

uploaded_file = st.file_uploader("Upload a binary file")

if uploaded_file is not None:
    raw_bytes = uploaded_file.getvalue()
    st.write(f"Read {len(raw_bytes)} bytes")
```

Convert text content
```python
import io
import streamlit as st

uploaded_file = st.file_uploader("Upload a text file", type=["txt"])

if uploaded_file is not None:
    text = io.StringIO(uploaded_file.getvalue().decode("utf-8")).read()
    st.text(text[:500])
```

Save to a temporary file for downstream libraries
Some libraries want a file path rather than a file-like object.
```python
import tempfile
import streamlit as st

uploaded_file = st.file_uploader("Upload a PDF", type=["pdf"])

if uploaded_file is not None:
    with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp:
        tmp.write(uploaded_file.getbuffer())
        temp_path = tmp.name
    st.write("Temporary path:", temp_path)
```

File size limits
Use max_upload_size for a per-widget limit:
```python
import streamlit as st

uploaded_file = st.file_uploader(
    "Upload a model artifact",
    type=["pkl", "joblib"],
    max_upload_size=50,  # MB
)
```

If you need a larger app-wide limit, configure it in .streamlit/config.toml:
```toml
[server]
maxUploadSize = 500
```

Validation and security
Extension filtering is helpful, but it is not a full security boundary.
You still need real validation in production apps:
- inspect MIME type or file headers when possible
- enforce size limits
- sanitize file names before saving
- parse uploaded content defensively
- never trust unvalidated uploads from users
Example validation:
```python
import pandas as pd
import streamlit as st

uploaded_file = st.file_uploader("Upload sales CSV", type=["csv"])

if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)
    required = {"region", "revenue", "date"}
    if not required.issubset(df.columns):
        st.error("Missing required columns.")
    else:
        st.success("File looks valid.")
        st.dataframe(df.head())
```

Use callbacks with uploads
st.file_uploader supports on_change, which is useful when the app should update extra state after file selection changes.
```python
import streamlit as st

st.session_state.setdefault("upload_count", 0)

def mark_upload() -> None:
    files = st.session_state.docs
    if files is None:
        st.session_state.upload_count = 0
    elif isinstance(files, list):
        st.session_state.upload_count = len(files)
    else:
        st.session_state.upload_count = 1

st.file_uploader(
    "Upload documents",
    accept_multiple_files=True,
    key="docs",
    on_change=mark_upload,
)

st.write("Selected files:", st.session_state.upload_count)
```

What file uploads improve in real apps
File uploads often change the audience for a Streamlit app.
Instead of a tool that only works for the developer's local files, you get an interface that other people can actually use with their own inputs.
That is what makes uploads especially valuable in:
- analyst-facing internal tools
- customer support tools
- QA or review apps
- simple ML demo interfaces
- one-off operational workflows
Common mistakes
1. Treating the return value like a file path
UploadedFile is a file-like object, not a local disk path.
2. Forgetting that multiple uploads return a list
When accept_multiple_files=True or "directory", iterate over the returned list.
3. Relying only on extension filtering
type=["csv"] improves UX, but it does not replace content validation.
4. Trying to control the uploader value through session state
You can read the current selection from st.session_state, but like st.button, the uploader's value cannot be set by assigning to its key; Streamlit raises an exception if you try.
5. Recreating the widget with a changing key
If the widget key changes between reruns, Streamlit treats it as a new uploader and users lose their selected files.
Troubleshooting
Why does pd.read_csv(uploaded_file) fail?
Check whether:
- the upload is actually a CSV
- the file is empty
- the encoding is unexpected
- you already consumed the file pointer and need to seek back to the start
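For the encoding case, a common hedge is to try UTF-8 first and fall back to Latin-1, which accepts any byte sequence (at the cost of possibly misreading characters):

```python
def decode_upload(raw: bytes) -> str:
    """Decode bytes as UTF-8 when possible, otherwise as Latin-1."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("latin-1")

print(decode_upload(b"caf\xc3\xa9"))  # UTF-8 encoded input
print(decode_upload(b"caf\xe9"))      # Latin-1 encoded input
```

In an app you would apply this to uploaded_file.getvalue() before handing the text to pandas or another parser.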
Why do my uploads disappear?
The uploader is part of the current session. A rerun with a changed widget identity, reload, or session reset can clear the selection.
How do I upload directly to S3 or a database?
First read the file in Streamlit, then hand the bytes or buffer to your storage library such as boto3 or a database client. Streamlit does not upload to S3 for you automatically.
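The handoff pattern can be sketched as follows; save_to_storage and the lambda below are illustrative stand-ins for a real client call such as boto3's upload_fileobj:

```python
import io

def save_to_storage(fileobj, key, upload_fn):
    """Rewind a file-like object and hand it to a storage client call."""
    fileobj.seek(0)  # the app may already have read the buffer
    upload_fn(fileobj, key)

# Simulated upload: in Streamlit, fileobj would be the UploadedFile.
uploaded = io.BytesIO(b"col_a,col_b\n1,2\n")
stored = {}
save_to_storage(uploaded, "uploads/data.csv",
                lambda f, key: stored.update({key: f.read()}))
print(sorted(stored))
```

With boto3 the upload_fn would be something like lambda f, key: s3.upload_fileobj(f, bucket_name, key), where s3 and bucket_name come from your own configuration.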
Can I preview an uploaded file before processing it?
Yes. Show a DataFrame head for CSV files, a thumbnail for images, or metadata like file name and size before running heavier logic.
Related Guides
- Streamlit DataFrame
- Streamlit Session State
- How to Run a Streamlit App
- Streamlit Button
- Streamlit Components
Frequently Asked Questions
How do I restrict file types in Streamlit upload?
Use the type argument, such as type=["csv", "xlsx"], to limit which file extensions the uploader accepts in the browser.
How do I limit file size in Streamlit?
Use max_upload_size for a per-widget limit, or set server.maxUploadSize in .streamlit/config.toml for an app-wide limit.
Where does Streamlit store uploaded files?
Streamlit gives your app an in-memory UploadedFile object. If you need the file to persist, save it yourself to disk, object storage, or a database.
How do I delete an uploaded file in Streamlit?
If the file only exists in the uploader, clearing the widget state usually means a reload, session reset, or re-creating the widget with a new key. If you saved the file yourself, you must delete that persistent copy yourself.
Can Streamlit upload multiple files?
Yes. Set accept_multiple_files=True to allow multiple files, or use accept_multiple_files="directory" to upload a directory.