The "Argument list too long" error occurs when you try to pass too many arguments to a shell command. In Unix-like systems, there's a limit to the number of bytes that can be passed to an executable in the argument list (typically around 128KB-2MB depending on the system). When you use wildcards like *.jpg
that expand to thousands of files, you easily hit this limit.
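You can check the ceiling on your own system; on Linux, getconf reports the value in bytes (a quick sanity check, nothing more):
# Print the maximum combined size of arguments and environment, in bytes
getconf ARG_MAX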
Many developers first try combining ls with xargs, but this approach fails:
ls /path/*.jpg | xargs -I {} cp {} /destination/
The problem persists because the shell has to expand *.jpg before it can even launch ls, so the argument limit is hit before xargs ever sees a single filename.
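One way to see that the limit only applies when an external program is launched is to let a shell builtin emit the list instead. In bash, printf is a builtin, so the expansion that breaks ls works here (paths are the illustrative ones from the failing example above, and GNU cp's -t flag is used to put the destination first):
# printf is a bash builtin: the expanded *.jpg list never goes through execve,
# so ARG_MAX does not apply; xargs then feeds the names to cp in batches
printf '%s\0' /path/*.jpg | xargs -0 cp -uf -t /destination/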
Method 1: Using find with -exec
The most reliable solution is to use find, which doesn't suffer from argument list limitations:
find /home/ftpuser1/public_html/ftparea/ -name "*.jpg" -exec cp -uf {} /home/ftpuser2/public_html/ftparea/ \;
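If launching one cp per file turns out to be the bottleneck, find can also batch filenames with the + terminator; this variant assumes GNU cp, whose -t flag lets the destination come before the file list:
# -exec ... {} + packs as many filenames as fit into each cp call,
# so far fewer processes are started than with \; (one per file)
find /home/ftpuser1/public_html/ftparea/ -name "*.jpg" -exec cp -uf -t /home/ftpuser2/public_html/ftparea/ {} +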
Method 2: find with xargs
For better performance with very large file sets, let xargs batch many filenames into each cp invocation (GNU cp's -t flag puts the destination first so the file list can go at the end):
find /home/ftpuser1/public_html/ftparea/ -name "*.jpg" -print0 | xargs -0 cp -uf -t /home/ftpuser2/public_html/ftparea/
The -print0 and -0 options handle filenames with spaces correctly.
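If the destination disk can keep up, GNU xargs can also run several cp processes in parallel; the -n and -P values here are illustrative rather than tuned:
# -n 100 passes up to 100 files per cp call, -P 4 runs up to four copies at once
find /home/ftpuser1/public_html/ftparea/ -name "*.jpg" -print0 | xargs -0 -n 100 -P 4 cp -uf -t /home/ftpuser2/public_html/ftparea/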
Method 3: Using rsync
For recurring copy operations, rsync is more efficient:
rsync -avm --include='*.jpg' -f 'hide,! */' /home/ftpuser1/public_html/ftparea/ /home/ftpuser2/public_html/ftparea/
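Because rsync filter rules are easy to get subtly wrong, it's worth previewing the transfer first; -n (--dry-run) lists what would be copied without touching anything:
# Dry run: show what rsync would transfer, then drop the n to copy for real
rsync -avmn --include='*.jpg' -f 'hide,! */' /home/ftpuser1/public_html/ftparea/ /home/ftpuser2/public_html/ftparea/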
For directories with millions of files:
- find with -exec is most reliable but slower
- find | xargs offers better performance
- rsync is best for repeated operations
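If you're unsure which method fits, counting the matching files first is cheap; the names travel through a pipe, so the count itself never touches the argument limit:
# How many files are we actually dealing with?
find /home/ftpuser1/public_html/ftparea/ -maxdepth 1 -name "*.jpg" | wc -l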
If you need more control over the copy process:
# Using a for loop (slower but more flexible)
for file in /home/ftpuser1/public_html/ftparea/*.jpg; do
[ -e "$file" ] || continue
cp -uf "$file" /home/ftpuser2/public_html/ftparea/
done
Note that the for loop itself never hits the argument limit: the glob is expanded inside the shell and no single external command ever receives the whole file list. It is, however, slow for large sets because it launches one cp process per file.
When dealing with thousands of files in Linux, you might encounter the frustrating "Argument list too long" error. This happens because the system limits how much argument data can be passed to a command through the shell.
The ARG_MAX limit in Linux (typically around 2 MB) defines the maximum combined length of the arguments and environment that can be passed to a new program. When you use a wildcard like *.jpg, the shell expands it to all matching filenames before invoking the command.
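GNU xargs can report the limits it will actually work within on your system; redirecting stdin from /dev/null stops it from waiting for input:
# Show the argument-size limits xargs will respect on this machine
xargs --show-limits < /dev/null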
Method 1: Using find with xargs
find /home/ftpuser1/public_html/ftparea/ -maxdepth 1 -name "*.jpg" -print0 | xargs -0 -I {} cp -uf {} /home/ftpuser2/public_html/ftparea/
Key points:
- -print0 and -0 handle filenames with spaces
- -maxdepth 1 prevents recursion into subdirectories
- Works with tens of thousands of files
Method 2: Using rsync (Best for Large Transfers)
rsync -av --progress --include='*.jpg' --exclude='*' /home/ftpuser1/public_html/ftparea/ /home/ftpuser2/public_html/ftparea/
Advantages:
- rsync builds its file list internally from the include/exclude filters, so the shell never expands the wildcard and the argument limit doesn't apply
- Can resume interrupted transfers (re-running the command skips files that are already up to date)
- Shows progress information
Method 3: Using a for Loop
for file in /home/ftpuser1/public_html/ftparea/*.jpg; do
[ -e "$file" ] || continue  # skip the literal pattern if nothing matched
cp -uf "$file" /home/ftpuser2/public_html/ftparea/
done
Note: slower because it launches one cp per file, but it never hits the argument limit since the glob is expanded inside the shell rather than passed to an external command.
For the fastest performance with millions of files:
cd /home/ftpuser1/public_html/ftparea/
find . -maxdepth 1 -name "*.jpg" -exec cp -uf -t /home/ftpuser2/public_html/ftparea/ {} +
The + terminator packs as many filenames as will fit into each invocation, so cp is not launched once per file.
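Whatever method you pick, a quick sanity check afterwards is to compare the counts on both sides; going through find keeps the check itself clear of the argument limit:
# Compare source and destination counts after the copy
find /home/ftpuser1/public_html/ftparea/ -maxdepth 1 -name "*.jpg" | wc -l
find /home/ftpuser2/public_html/ftparea/ -maxdepth 1 -name "*.jpg" | wc -l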
When other methods aren't suitable, you can write the file list to a temporary file first; ls receives only the directory name here, so it never hits the argument limit, and the loop then copies one file at a time. Note that ls prints bare filenames, so the source directory has to be prepended:
ls /home/ftpuser1/public_html/ftparea/ > /tmp/filelist.txt
while read -r file; do
cp -uf "/home/ftpuser1/public_html/ftparea/$file" /home/ftpuser2/public_html/ftparea/
done < /tmp/filelist.txt