Selected Linux commands

From Biowikifarm Metawiki
Revision as of 11:04, 4 November 2021

Tips & first steps

Command-line tricks:

  • type Control-r and start typing to search backwards through your command history; press Control-r again to cycle through further commands matching your input
  • use auto-completion via ⇄ (tab) or twice ⇄ (tab):
    • if you have typed an “a” and then ⇄ (tab) it will auto-complete all available a… command line programs
    • ⇄ (tab) works for command line options as well, e.g. type grep -- (no space after --) and then ⇄ (tab) twice; it will show all available options that use two dashes
  • to stop or interrupt a command line process immediately type Control-c
  • file names starting with a - (dash) will cause errors because a "-string" or "--string" is parsed as a command line option. Safe handling: insert " -- " to end option parsing, e.g.
# ls=listing command
ls *  # not dash-safe handling
ls -- * # dash-safe handling, lists also files with beginning dashes like '-myfile'
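A quick demonstration (the file name and the /tmp/dashdemo scratch directory are invented for this example):

```shell
# create a scratch directory and a file whose name starts with a dash
rm -rf /tmp/dashdemo && mkdir -p /tmp/dashdemo && cd /tmp/dashdemo
touch -- -myfile           # without "--", touch would parse "-myfile" as options
ls -- *                    # dash-safe listing: prints -myfile
rm -- -myfile              # the same "--" trick works for rm
```

Another common escape is prefixing the path, e.g. rm ./-myfile.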

The quickest way to locate a command or executable is "locate" and its relatives, such as:

 locate memcached # or
 whereis memcached
 apropos ftp # searches for all "ftp" programs

Traditional file searching uses find:

 find / -name 'file.ext' # "/" to start at root, option -name to search for file names

Linux version numbers:

 sudo cat /etc/debian_version # full Debian version number 
 uname -a # Kernel / Xen version numbers

Tail can work with multiple files at once:

 tail -f /var/log/apache2/access-*.log


Use apt-get or aptitude; listed here are mostly some special solutions:

Very useful to find the correct package name of software which may not yet be installed. YourSearchTerm can be a regexp, e.g., .*ghost.*

 apt-cache search YourSearchTerm

Which versions are available in the distribution repos: apt-cache policy <packageName>

sudo apt-cache policy nginx

Install specific version:

sudo apt-get install nginx=1.10.3-1~dotdeb+7.1

List all installed packages with version:

sudo dpkg -l

Get help

 apropos ftp # searches for all program descriptions with ftp
 man 7z # manual page for 7z
   # shift + ? → searching a string (by regular expression)
   # n / N → find the next / previous match
   # q → quit

Most commands provide a help option and the following works:

 a-command --help
 a-command -h
 # sometimes there is an info on usage
 a-command --usage

Stop/Start Services

You can check the status beforehand, e.g.: sudo service nginx status


sudo service cron stop
sudo service dropbox stop
sudo service nginx stop
sudo service php5-fpm stop
#sudo service tomcat5.5 stop
sudo service fedora stop
sudo service apache2 stop
#sudo  service memcached stop
sudo service mysql stop
# direct, does not work: sudo /usr/share/fedora/tomcat/bin/shutdown.sh - instead through script:
sudo service webmin stop
sudo service clamav-freshclam stop

to restart (template to copy and use directly):

sudo service clamav-freshclam start
#sudo service memcached start
sudo service mysql start
#sudo service tomcat5.5 start
sudo service fedora start
sudo service apache2 start
sudo service php5-fpm start
sudo service nginx start
# direct, does not work: sudo /usr/share/fedora/tomcat/bin/startup.sh - instead through script:
sudo service webmin start
sudo service cron start
sudo service dropbox start

Fedora may or may not be installed under the main tomcat. As of 2009-08, it can be started/stopped using (fedora folder is a softlink to current installed version; alternatively one can use "$FEDORA_HOME"):

 sudo /usr/share/fedora/tomcat/bin/shutdown.sh
 sudo /usr/share/fedora/tomcat/bin/startup.sh


  • sudo /etc/backupscripts/services-start.sh
  • sudo /etc/backupscripts/services-stop.sh

Disk usage and repair

How much space on disks?

 df -l # diskfree, local disks only
 df -lh # diskfree, human friendly (size in MB, GB, etc.)
 df .  # diskfree current disk

Disk usage = "tree size": where is the space used:

 du -h --max-depth=1 # analyze only 1 level of directories deep
     # -h = human readable, MB, GB, factor 1024; -si would be factor 1000.
 du -S # do not add up content of folders, keep values Separate (useful for manual analysis)
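To rank those per-directory sums, sort -h understands the human-readable suffixes (a sketch; the paths are examples):

```shell
# first-level directory sizes, smallest to largest
du -h --max-depth=1 . | sort -h
# the same in kilobytes, largest first, top 10 only
du -k --max-depth=1 . | sort -rn | head -n 10
```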

If the file system shows errors when mounting (here /mnt/backup):

 umount /mnt/backup
 e2fsck -y /dev/xvdf
 e2fsck -y /dev/xvdf # run a second time to verify the file system is now clean
 mount /mnt/backup

Memory Usage

free -h  # display human readable numbers with example output
#              total       used       free     shared    buffers     cached
# Mem:          4,4G       3,4G       964M         0B       178M       967M
# -/+ buffers/cache:       2,3G       2,1G
# Swap:         4,7G       264M       4,4G
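A sketch for turning that output into a single number (assumes the procps free; in the Mem: row, column 2 is total and column 3 is used):

```shell
# percentage of physical memory currently in use
free | awk '/^Mem:/ { printf "%.0f%% used\n", $3/$2*100 }'
```

The exact column layout differs slightly between procps versions, but total and used are stable as columns 2 and 3 of the Mem: row.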


You can change owner and group in one step:

chown root:root

For access rights, it is often sufficient to change the group of a file or folder; changing the owner as well (chown -R) is usually not necessary.

 ls -l    # will display owner and group names
 ls -g    # will only display group names (easier)
 chgrp -R # R: do it recursively

When copying folder trees, it is easy to lose essential information. Use

 # preserve owner, rights, etc., copy recursively; 
 cp --preserve --recursive /source/path/ /target/path/ # or
 cp -pr /source/path/ /target/path/
 # but devices, sockets, etc. still are not handled, if this is necessary use:
 tar --preserve
 tar -p
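The usual idiom for this is a tar pipe: pack on one side, unpack on the other, with -p preserving permissions (the paths below are a made-up demo):

```shell
# demo tree with a non-default file mode
rm -rf /tmp/tardemo && mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo hello > /tmp/tardemo/src/file.txt
chmod 640 /tmp/tardemo/src/file.txt
# pack src, unpack into dst; -C changes directory on each side, -p preserves permissions
tar -C /tmp/tardemo/src -cpf - . | tar -C /tmp/tardemo/dst -xpf -
```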


If locate does not work, use find, which is harder to master. Good info: http://content.hccfl.edu/pollock/unix/findcmd.htm Example for find:

# find from the root level
  find / -name index.html
# find from the current directory level
  find ./ -name index.html
# find from the upper directory level
  find ../ -name index.html
# regular expression search:
# find a file with extension .svg and ä-something in the current subdirectory ./e
  find ./e/ -regex '.*ä.*\.svg'
# find via regex using egrep, finds 8 numbers followed by optional _ followed by at least one number to the end of the search string
  # e.g.: /var/www/v-dina/w_20160403_0049
  find /var/www/ -maxdepth 5  -type d -regextype posix-egrep -regex '.*_[0-9]{8,}_?[0-9]+$'
# find “only in this directory here” image file names with regular expression search (case insensitive)
  find . -maxdepth 1 -iregex ".*\(jpg\|jpeg\|png\|gif\|svg\|tif\|tiff\)"
# find and execute something on the fly
  find /var/www/v-species/o/media/0/  -user root -name '*' -exec sudo chown -R  www-data:www-data '{}' ';'
  # tip: twice a -exec ... statement can trigger two commands during the search

# find something inverse or not. E.g. find files but only without "access" and "error"
  find . -type f -not -name "*access*" -and -not -name "*error*"

Searching for links to a folder

# find all symlinks containing the string "mediawiki26wmf"
cd /var/www; find . -lname '*mediawiki26wmf*'
# find all links that have the same given file with some additional print outs
find -L /var/www/ -maxdepth 4 -samefile /usr/share/mediawiki26/index.php -exec dirname '{}' ';' | \
  sort | \
  awk '{ print sprintf("# Wiki %03d is linked -> %s",NR, $0)}'

Above as copyable single line:

sudo find -L /var/www/ -maxdepth 4 -samefile /usr/share/mediawiki26wmf/index.php -exec dirname '{}' ';' |sort|awk '{print sprintf("# Wiki %03d is linked -> %s",NR, $0)}'

Searching by file size

# type f → file, size > 4MB, printf %k → kB, %f → file, %p → path, finally sort the list
  find . -name '*.jpg' -type f -size +4M -printf "%05k kB %p\n" | sort --numeric-sort
  # 18148 kB ./Spirobassia hirsuta/Spirobassia_hirsuta_4_Schleswig-Holstein_Insel_Röm_Timm_Herb_HBG_(Rolf_Wißkirchen).jpg
  # 18808 kB ./Corispermum marschallii/Corispermum_marschallii_3_Berlin-Moabit_Krüger_Herb_HBG_(Rolf_Wißkirchen).jpg
  # 19888 kB ./Corispermum marschallii/Corispermum_marschallii_4_Schwetzingen_Dürer_Herb_HBG_(Rolf_Wißkirchen).jpg
  find . -name '*.jpg' -type f -size +4M -printf "%05k kB %f\n" | sort --numeric-sort
  # 18148 kB Spirobassia_hirsuta_4_Schleswig-Holstein_Insel_Röm_Timm_Herb_HBG_(Rolf_Wißkirchen).jpg
  # 18808 kB Corispermum_marschallii_3_Berlin-Moabit_Krüger_Herb_HBG_(Rolf_Wißkirchen).jpg
  # 19888 kB Corispermum_marschallii_4_Schwetzingen_Dürer_Herb_HBG_(Rolf_Wißkirchen).jpg

Search content of files

For searching inside files, use grep [OPTIONS] PATTERN [FILE...]. Common options are:

  • -r or --recursive searches recursively, but only within the given file pattern
  • -R or --dereference-recursive reads all files under each directory, recursively, following all symbolic links
  • -n or --line-number prints the line number on which the match was found
  • -i or --ignore-case ignores the case of matches
  • -H or --with-filename print the file name

For details type into the console: man grep

# set highlighted colors for grep findings in the current session
# permanently it can be set too in your home directory ~/.profile
  export GREP_OPTIONS='--color=auto'
# find xml in svg files recursively
  grep --include=*.svg -r "xml" .
# find <script> in all files recursively (-r or --recursive) and ignore case (-i or --ignore-case)
  grep --recursive --ignore-case "<script>" *
# with line numbers in the file and combined with find
  # search for file pattern: ".*etting.*.php"; -n → add line numbers; -r → search recursively
  grep --include="*etting*.php" -n -r "smwgDefaultStore" ./
# Result may be:

./extensions/SemanticMediaWiki/SMW_Settings.php:48:# the $smwgDefaultStore.

# open the file at a specific line number position
  nano +30 myfoundfile.php # nano editor
  vi +30 myfoundfile.php # vi-editor

To search in nested folders, one needs to combine find and grep. Try:

# find (do not follow symlinks → “-P”, only files “-type f”)
# results of find are executed with grep (show the found line number, file name and prompt with color output)
  find -P . -name "*.php" -type f -exec grep --line-number --with-filename --color=auto 'regexp-search' '{}' \;
# with xargs
  find . -name "*.php" -print | xargs grep -iRnH "SEARCHTEXT"
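File names containing spaces break the plain -print | xargs pipeline; the null-delimited variant is safe (the demo file name and /tmp/grepdemo directory are invented):

```shell
rm -rf /tmp/grepdemo && mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
echo 'contains SEARCHTEXT here' > 'file with spaces.php'
# -print0 emits NUL-terminated names, -0 tells xargs to split on NUL
find . -name "*.php" -print0 | xargs -0 grep -inH "SEARCHTEXT"
# ./file with spaces.php:1:contains SEARCHTEXT here
```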

Find something within a *.gz or *.tar.gz archive on the fly; this is often required when searching through all kinds of log files.

Methods using zgrep, which can handle both *.gz and normal files:

  # show the last 5 lines of all mail logs, regardless of being a normal file or *.gz file
  for f in /var/log/mail*; do echo "###### ${f}"; zgrep ".*" "${f}" | tail -n 5;done

  # with find + gunzip + grep together concatenated by \ (line breaks)
  # '{}' or "{}" represent the found file-string from find in -exec syntax and ";" terminates -exec syntax
  find /mnt/dump/var/log/ -name "*.gz" -type f \
  -exec zgrep --color=auto --with-filename --line-number 'searchterm' '{}' ';'

Method using gunzip: Basically use gunzip for *.gz and tar for *.tar.gz archives:

# search "searchterm" in gz archives, note that the \ concatenates line breaks
# for *.tar.gz use → tar --to-stdout -xzvf file.tar.gz
  gunzip --to-stdout /mnt/dump/var/log/dpkg.log.2.gz \
  | grep --color=auto --with-filename --line-number --label=/mnt/dump/var/log/dpkg.log.2.gz "searchterm"
  # output something like
  #                              line number    
  # ┌──────── grep's label ───────┐ ┌┴┐ ┌───────────── content ────────────────────────────────────────────────┐
  # /mnt/dump/var/log/dpkg.log.2.gz:701:2012-06-07 18:19:26 configure php-pear 5.3.3-7+squeeze9 5.3.3-7+squeeze9

# together with find one can combine both methods: find just lists all *.gz files, and those can be used
# and executed by find's -exec syntax
  # normal find
  find /mnt/dump/var/log/ -name "*.gz"
  # with find + gunzip + grep together concatenated by \ (line breaks) and | (redirect std-output to next cmd)
  # '{}' or "{}" represent the found file-string from find in -exec syntax and ";" terminates -exec syntax
  find /mnt/dump/var/log/ -name "*.gz" \
  -exec echo "check" "{}" ";" \
  -exec bash -c "gunzip --to-stdout '{}' | grep --color=auto --with-filename --line-number --label='{}' 'searchterm' "  ";"

Example to search for an extension:

cd /var/www; find . -type f -exec grep --line-number --color=auto --with-filename '/extensions/MobileKeyV1/MobileKeyV1.php' '{}' \;

Process handling

htop (improved interactive taskmanager)
mostly see top below, no specific documentation here yet...
top (interactive taskmanager)
top or top -u www-data lists all processes or only for user www-data
or filter a specific command via top -c -p $(pgrep -d ',' --full 'mysql') list only those commands containing mysql
h get help and settings
c show command’s path instead of process names
C scroll by coordinates; you can then scroll in the process list via the arrow keys
k kill a process by PID
u filter for a specific user
q quit
Sorting: b → show the sorted column in bold; > or < → select the next or previous column for sorting; F → select a specific column for sorting; R → toggle between descending and ascending sort order
z is coloring the display and Z customises colors
W write and save current settings of top
ps (taskmanager: processes’ snapshot)
ps -ef → list all processes, including services, use kill (number) or pkill (name)
ps -ef | grep 'command' → just show lines with “command”
ps axjf → show the processes as tree
kill processes by name
kills a process, for example killall memcached

Kill a hanging apt-get:

 ps -e | grep "apt" # search for "apt" processes
 kill [THE_PID_processID]
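pgrep/pkill combine the search and the kill in one step; a sketch (the pattern "apt" is just the example target):

```shell
# list PIDs plus full command line of everything matching "apt"
pgrep -a apt || true        # pgrep exits 1 when nothing matches, hence "|| true"
# kill by name instead of by PID (commented out here; use with care)
# sudo pkill apt-get
```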

Other commands

Changing permissions of files/directories

See also http://www.onlineconversion.com/html_chmod_calculator.htm for setting rights numerically.

chmod [options]... mode [,mode]... file... 
permissions can be set either by octal number code or by characters:
chmod u+x myscript.sh (add executable mode to a shell script by the user/owner: u → user, + → add, x → executable)
chmod u-x myscript.sh (remove executable mode from a shell script for the user/owner: - → remove)
chmod a+r file (allow read permission to everyone: a → all users, + → add, r → readable)
chmod -R u=xrw,go=r ./directory (set «drwxr--r--» for files and directories recursively: -R → recursively, u → user, g → group, o → other people)
Note that chmod -R u=rw,go=r ./directory (i.e. recursive 644) also strips the x bit that directories need to be listable! Hence, don't set 644 on directories at all; give them u=rwx,go=rx (755) instead, which find can do for all directories at once:
find . -type d -exec echo 'chmod to 755 for: ' '{}' ';' -exec chmod u=rwx,go=rx '{}' ';'
This searches here (the dot .) for everything of directory type and executes two commands; '{}' in quotes stands for the found (relative) path. In detail:
find . → search, starting in the current (→ .) directory
-type d → only the directory type
-exec → start execution of
echo 'chmod to 755 for: ' → the echo bash command
'{}' → the found string of find (= relative path)
';' → stop the -exec command option here
-exec → start another execution of
chmod u=rwx,go=rx → the chmod command with the rights to set
'{}' → the found string of find (= relative path)
';' → stop the -exec command option here
find . \
  -type d \
  -exec \
    echo 'chmod to 755 for: ' \
    '{}' \
    ';' \
  -exec \
    chmod \
      u=rwx,go=rx \
    '{}' \
    ';'
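The same -type filter can give files and directories their usual sane defaults (644 for files, 755 for directories) in one pass; a sketch on a throw-away demo tree (the path /tmp/chmoddemo is invented):

```shell
# demo tree
rm -rf /tmp/chmoddemo && mkdir -p /tmp/chmoddemo/sub
touch /tmp/chmoddemo/sub/file.txt
# files: rw-r--r-- (644); directories keep the x bit needed for listing: rwxr-xr-x (755)
find /tmp/chmoddemo -type f -exec chmod u=rw,go=r '{}' ';'
find /tmp/chmoddemo -type d -exec chmod u=rwx,go=rx '{}' ';'
```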

Directory listing

A source helping to understand the Linux file system is: http://www.pathname.com/fhs/pub/fhs-2.3.html

ls --directory */
ls -d */ 
list only directories in the current path
ls -lap 
to better see all files, -l use long listing, -a list all files/directories, i.e. also hidden ones, -p to see folders marked with trailing “/”
ls -lap --sort=size or ls -lap --sort=size --reverse sort by size or in reversed order
ls -lap --sort=time or ls -lap --sort=time --reverse sort by time modified or in reversed order
ls -lap --sort=extension or ls -lap --sort=extension --reverse sort by file extension or in reversed order
show directories and files as a tree, e.g. directories first, with a trailing slash appended (directory/), one level down: tree --dirsfirst -FL 1 /etc/
use option --dirsfirst to list directories first
use option -F to append / to directories and append other characters (read "man tree")
use option -L 1 for only one level down
use option -d to show only directories
use option --prune to remove empty directories from the output

├── backups -> /mnt/dump/var/backups
├── tmp
└── www

Reading files

tail file 
displays the end of files; tail -9 mylog.log gets the last 9 lines
cat file 
concatenate files and print on the standard output
cat /etc/passwd to see user list
cat -v filename to display non-printing characters so they are visible. If the file has been edited on a Windows machine, CR/LF (^M) characters may have been added at the end of each line (hidden by default in most editors), so #!/bin/sh becomes #!/bin/sh^M. This causes the error: bad interpreter ^M. To remove such characters, use e.g. cat infilename | tr -d "\r" > outfilename
more file 
reads as much as the screen can display and waits; Enter shows the next line
less file 
like more, but also allows scrolling backwards
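The CR/LF problem mentioned under cat -v can be reproduced and fixed in a few lines (the file names under /tmp are invented):

```shell
# build a script with Windows line endings
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/crlf.sh
cat -v /tmp/crlf.sh                        # each CR shows up as ^M
tr -d '\r' < /tmp/crlf.sh > /tmp/unix.sh   # strip the carriage returns
```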

Editing/comparing files

nano file 
an easier editor on debian 4 with syntax highlighting set globally in /etc/nanorc
vi file 
complicated but enhanced command line editor with syntax highlighting, auto indent and macro functionality.
Basically vi has two modes: an insert/writing mode and a command mode. i = insert/writing mode, Esc = back to command mode; in command mode: "u" = undo, ":w" = write/save, ":q" = quit, ":q!" = quit without saving, ":x" = save & quit, "/word" = search for “word” (n → next, N → previous match)
vimdiff file1 file2
show differences with syntax highlighting in 2 columns (default). Ctrl + w w switches between 2-column-window parts
vimdiff -o file1 file2 horizontal instead of columns, ":q" = quit

User management

See also category: User management

adduser, deluser, passwd 
user management. More on user management: http://www.cae.wisc.edu/site/public/?title=linaccounts
sudo adduser USERNAME GROUPNAME adds existing user to existing group
sudo passwd USERNAME allows to reset passwords for users
id -u USERNAME get user id uid of USERNAME
grep '\bNUMBER\b' /etc/passwd | cut --delimiter=':' --fields=1 get user name by user id uid NUMBER
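getent does the same lookup but also consults LDAP/NIS, not only /etc/passwd; a sketch using uid 0:

```shell
# resolve a numeric uid to the user name (0 → root on virtually every system)
getent passwd 0 | cut --delimiter=':' --fields=1
# root
```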

vnc = special vnc user, not sure whether useful.


wget http://... 
downloads the specified file. Option -c allows an interrupted download to continue, and --output-document sets another output name, e.g. wget http://... -c --output-document=outputname.html
scp user@hostname.net:/path/on/the/server /local/path 
downloads the specified file from a server to a local machine.

Install commands, package management

# To find package names or version use: 
aptitude search (keyword)
aptitude search '~i java' # show all installed packages matching java in package name
aptitude search '~i ~d java' # show all installed packages matching java in package name and description
apt-cache search (keyword)
apt-cache search (keyword) | sort
apt-cache search 'php.*sql'
apt-cache search 'elvis|vim'

# Clean-up:
apt-get autoremove  <package>  # remove automatically all unused packages
apt-get clean          # erase all downloaded archive files in /var/cache/apt/archives/
apt-get autoclean      # it only removes package files that can no longer be downloaded
apt-get remove <package> # remove package
apt-get purge <package>  # remove package *and* config files

# Install, installed packages
apt-get update     # retrieve new lists of packages

apt-cache policy  <package> # can show the installed and the remote version (install candidate)
apt-show-versions <package> # If installed, shows version information about one or more packages
aptitude search '~i' # show all installed packages

apt-get --simulate install <package> # simulate install
apt-get install <package> # install new packages (pkg is libc6 not libc6.deb)
apt-get install <package name>=<version> # install a specific version

dpkg -l (keyword)
# to list all packages with codes for status, e.g.'ii' for installed,'rc' for
# removed, but configuration files still there; 
dpkg -s (package name) # to get information on the status of a package

ls colors

Putty displays ls output in colors. These are:

  • Executable files: Green
  • Normal file : Normal
  • Directory: Blue
  • Symbolic link : Cyan
  • Pipe: Yellow
  • Socket: Magenta
  • Block device driver: Bold yellow foreground, with black background
  • Character device driver: Bold yellow foreground, with black background
  • Orphaned symlinks : Blinking Bold white with red background
  • Missing links ( - and the files they point to) : Blinking Bold white with red background
  • Archives or compressed : Red (.tar, .gz, .zip, .rpm)
  • Image files : Magenta (.jpg, gif, bmp, png, tif)

OR (other source):

Type                Foreground Background
Folder/Directory    blue      (default)
Symlink             magenta   (default)
Socket              green     (default)
Pipe                brown     (default)
Executable          red       (default)
Block               blue      cyan
Character           blue      brown
Exec. w/ SUID       black     red
Exec. w/ SGID       black     cyan
Dir, o+w, sticky    black     green
Dir, o+w, unsticky  black     brown

Renaming file extensions

Linux does not support wildcards in the target of a move command the way Windows does, where one would write:

rename *.jpeg *.jpg


# using Perl's rename command (check it by reading the manual page 'man rename'!)
# rename [options] perlexpr [ files ]
rename 's/\.jpeg$/.jpg/' *.jpeg

# ordinary rename command (check it by reading the manual page 'man rename'!)
# rename [options] expression replacement file(s)
# rename .jpeg .jpg *.jpeg

# or using a for loop
# for thisfile in *.{jpeg,JPEG}; do
for thisfile in *.jpeg; do
  newfile=${thisfile/.jpeg/.jpg}; # saves to variable $newfile
  mv "$thisfile" "$newfile";
done

Related: To add a prefix use:

for i in *.jpg; do mv -i "$i" "XXX_$i"; done

Upload or Download via rsync

Simple mirroring from source to destination between folders, preserving permissions and timestamps, propagating deletions from left to right:

a) Simple method, size and timestamp based updates, from sourcepath/ to destinationpath/, deleting files on the receiver that are not present in the source, skipping files that are newer on the receiver ("u", i.e. this is not a safe backup). Example:

sudo rsync -auv --delete-after /mnt/storage/ /mnt/storage2/

b) Simple backup method, creating a full copy on the receiving end (archive-verbose-human-readable). Examples:

sudo rsync -avh --delete-after --delete-excluded --exclude='*/thumb/*' /mnt/storage /mnt/BIG/bak-2017-12-20/storage
sudo rsync -avh --delete-after --delete-excluded --exclude='*/cache/*' /var/www     /mnt/BIG/bak-2017-12-20/www
sudo rsync -avh --delete-after --delete-excluded                       /var/etc     /mnt/BIG/bak-2017-12-20/etc
sudo rsync -avh --delete-after --delete-excluded   /mnt/dump/var/backups/DAILY   /mnt/BIG/bak-2017-12-20/backup-daily

c) Safest and slowest method, using c-option = checksum/hash for comparison, with deleting:

rsync -acvh --delete-after sourcepath/ destinationpath/

Upload or download

# using remote shell program option rsh
rsync --rsh="ssh" --archive --verbose --compress [source directory] [user]@[instance ip]:[destination directory on instance]
rsync --rsh="ssh" --archive --verbose --compress [user]@[instance ip]:[destination directory on instance] [source directory]
# -a, --archive               archive mode
# -b, --backup                make backups
# -c, --checksum              skip based on checksum, not mod-time & size
# -e, --rsh=COMMAND           specify the remote shell to use
# -i, --itemize-changes       output a change-summary for all updates
# -l, --links                 copy symlinks as symlinks
# -r, --recursive             recurse into directories
# -t, --times                 preserve modification times
# -z, --compress              compress file data during the transfer
rsync --recursive --times --verbose --progress --ignore-existing --checksum --backup --itemize-changes --stats --rsh='ssh'  '/home/me/Files/myfile.xml' 'my-remote-user-name@'

Good resource: http://www.jveweb.net/en/archives/2010/11/synchronizing-folders-with-rsync.html


With the general zip, p7zip-full etc. installed, the following commands work (-mx=9 = max. compression):

 # -9 is optional, higher compression
 zip archivename.zip file.sql /folder -9  
 unzip archivename.zip
 # for real good compression use:
 # (a = add, -mx7 and -mx9 = higher compression, x = extract)
 7z a -mx=9 archivename.7z file.sql /folder
 7z x archivename.7z

Note: Because 7z will not store owner or group information, the option -r = recurse into subfolders is not recommended. To archive folders use (where ! is the folder name, as in WinSCP custom commands):

 # tar + 7z a folder: cf = create file, a = add, -si = Read data from StdIn
 tar cf - "!" | 7za a -si -mx7 "!.tar.7z"
 # tar/7z to folder: x = eXtract with full paths, -so = Write data to StdOut, bd = Disable percentage indicator
 # Unpack with xf = extract file
 7za x -so -bd "!" | tar xf -
 # Example for command line, with sudo (twice!)
 cd /mnt/dump/var/log/; sudo tar cf - "nginx" | sudo 7za a -si -mx7 "nginx.logs.2013-xx-xx.tar.7z"

Note: inside WinSCP, the tar/7z command to folder results in error (ok in ssh), but simply selecting SKIP results in correct result, this seems to be more a bug of the way WinSCP handles messages than of the process (?).

# find files larger than 50MB, but exclude already compressed gz-files and zip-files, and compress them by gzip in verbose mode
find . -type f -iname '*' -size +50M \
  -not -iname '*.gz' \
  -not -iname '*.zip' \
  -exec gzip --verbose '{}' ';'

Problem searching

Who logged on? See: tail --lines=100 /mnt/dump/var/log/auth.log

Apache processes: see if the apache process count rises again over time: ps uax | grep apache2 | wc. The first number will be the count+1 of apache processes (the extra 1 is the grep itself). 7 is normal; 150 was observed on overload.
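A variant that avoids the +1: the bracketed pattern [a]pache2 still matches apache2 processes but never the grep process itself, and grep -c counts directly (a sketch):

```shell
# count apache2 processes without counting the grep itself
ps uax | grep -c '[a]pache2' || true   # grep -c exits 1 (printing 0) when nothing matches
```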

Writing shell scripts

Very helpful: http://www.calpoly.edu/~rasplund/script.html

Overlong text files

SQL dumps can exceed 1 GB of text and are difficult to handle in an editor. To extract one table from a backup, either restore the entire database into a newly created db and re-export only the table in question, or use vi to find the approximate line numbers (e.g. 3430-3465) and then sed:

sed --quiet "3430,3465 p;" 2012-09-22_metawiki.sql > 2012-09-22_metawiki.user.sql

(p = print)
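The line numbers can also be found without opening the file: mysqldump writes a CREATE TABLE statement before each table, so grep -n locates the boundaries. A sketch on a toy stand-in dump (the file names under /tmp are invented; a real dump is searched the same way):

```shell
# toy dump with two tables
printf 'CREATE TABLE `user` (id int);\nINSERT INTO `user` VALUES (1);\nCREATE TABLE `page` (id int);\n' > /tmp/dump.sql
grep -n 'CREATE TABLE' /tmp/dump.sql        # line numbers of the table boundaries
# extract only the `user` table (lines 1-2 here) with sed, as above
sed --quiet '1,2 p' /tmp/dump.sql > /tmp/user.sql
```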