Everything posted by brent

  1. If the Red Hat Insights site is reflecting a different hostname, simply run the following to make the insights-client check back in: # insights-client --version
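If the hostname still does not update after a check-in, re-registering the client usually forces Insights to refresh the host record. A minimal sketch, assuming the standard insights-client options (verify against your version's man page):
# Check whether the host is currently registered with Insights
insights-client --status
# Re-register so the current hostname is reported on the next upload
insights-client --unregister
insights-client --register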
  2. Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system. Prerequisites To follow this tutorial, you will need the following: Ubuntu 20.04 server. Step 1 — Installing Docker The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package. First, update your existing list of packages: sudo apt update Next, install a few prerequisite packages that let apt use packages over HTTPS: sudo apt install apt-transport-https ca-certificates curl software-properties-common Then add the GPG key for the official Docker repository to your system: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add the Docker repository to APT sources: sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" This will also update our package database with the Docker packages from the newly added repo. Make sure you are about to install from the Docker repo instead of the default Ubuntu repo: sudo apt-cache policy docker-ce You’ll see output like this, although the version number for Docker may be different: docker-ce: Installed: (none) Candidate: 5:19.03.9~3-0~ubuntu-focal Version table: 5:19.03.9~3-0~ubuntu-focal 500 500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 20.04 (focal). Finally, install Docker: sudo apt install docker-ce Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running: sudo systemctl status docker The output should be similar to the following, showing that the service is active and running: Output ● docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago TriggeredBy: ● docker.socket Docs: https://docs.docker.com Main PID: 24321 (dockerd) Tasks: 8 Memory: 46.4M CGroup: /system.slice/docker.service └─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial. Step 2 — Executing the Docker Command Without Sudo (Optional) By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get an output like this: Output docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'. 
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group: sudo usermod -aG docker ${USER} To apply the new group membership, log out of the server and back in, or type the following: su - ${USER} You will be prompted to enter your user’s password to continue. Confirm that your user is now added to the docker group by typing: groups Output sammy sudo docker If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using: sudo usermod -aG docker username The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.
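As a quick sanity check at this point (not part of the tutorial proper), you can confirm both the installation and your docker group membership by running a throwaway test container without sudo:
# Show client and daemon versions (a daemon section in the output proves the socket is reachable)
docker version
# Pull and run Docker's tiny test image, removing the stopped container afterwards
docker run --rm hello-world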
  3. Portainer upgrade If you already have Portainer installed, you’ll need to stop and remove it from your system before you upgrade the container. To do that, run this command: sudo docker stop portainer && sudo docker rm portainer You will probably be prompted for your sudo password. Enter that and the system will remove the Portainer container, but it will NOT delete your Portainer data, because the portainer_data volume is left in place. Next, you’ll want to pull the latest Portainer image: sudo docker pull portainer/portainer-ce:latest Once that is done, you’re ready to deploy the newest version of Portainer: sudo docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest Now you can go to http://your-server-address:9000 and log in. Note: Doing this will NOT remove your other applications/containers/etc.
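After the new container starts, it can be reassuring to confirm the upgraded image is actually the one running. A small sketch using standard Docker commands (the container name matches the run command above):
# Confirm the portainer container is up and which image it was created from
sudo docker ps --filter name=portainer
# Inspect the image reference recorded for the container
sudo docker inspect --format '{{.Config.Image}}' portainer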
  4. You want to run a cron job that should run a specific shell script /home/jobs/sync.cache.sh every minute. How do I use crontab to execute a script every minute on a Linux or Unix-like system? How can I run a cron job every minute on Ubuntu Linux? Cron is one of the most useful tools in Linux or UNIX-like operating systems. It is usually used for sysadmin jobs such as backups or cleaning /tmp/ directories and more. Let us see how we can run a cron job every minute on Linux, *BSD and Unix-like systems.
Run cron job every minute
The syntax is:
* * * * * /path/to/your/script
To run a script called /home/vivek/bin/foo, type the crontab command:
$ crontab -e
Append the following job:
* * * * * /home/vivek/bin/foo
Save and close the file.
How does it work? The syntax for crontab is as follows:
* * * * *  command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
The asterisk (*) operator specifies all possible values for a field. For example, an asterisk in the hour field would be equivalent to every hour, and an asterisk in the month field would be equivalent to every month. An asterisk in every field means run the given command/script every minute.
A note about using the /etc/cron.d/ directory
If you put a cron job in the /etc/cron.d/ directory you must provide the username to run the task as in the task definition:
* * * * * USERNAME /path/to/your/script
For example, run a script that uses rsync to replicate changed files. Create a file named /etc/cron.d/rsync.job
$ sudo vi /etc/cron.d/rsync.job
Append the following:
PATH=/sbin:/usr/sbin:/bin:/usr/bin
# Start job every 1 minute
* * * * * root /root/bin/replication.sh
# Another example: a job every 1 minute is a commonly used cron schedule.
* * * * * root /usr/bin/perl /usr/lib/cgi-bin/check.for.errors.cgi
Save and close the file. Here is a sample /root/bin/replication.sh file:
#!/bin/bash
# Usage: A sample shell script to replicate newly added
# HTML files/images/js etc on all $servers i.e. poor man's
# file replication service ;)
#
# Author: Vivek Gite, under GPL v2.0+
#
# Note: Set ssh pub key based auth for this script to work
# ------------------------------------------------------------
_rsync="/usr/bin/rsync"
_rsync_opt='-az -H --delete --numeric-ids --exclude=cache/css --exclude=tmp/js'
# user name for ssh
u="vivek"
# server nodes
servers="node01 node02"
# Source and dest
S='/home/vivek/wwwfiles/'
D='/home/vivek/wwwfiles'
# Let us loop it and do it
for b in ${servers}
do
  ${_rsync} ${_rsync_opt} "$@" ${S} ${u}@${b}:${D}
done
A note about dealing with race conditions when running a cron job every minute
We are going to use the flock command, which manages flock(2) locks from within shell scripts or from the command line. Modify your script as follows to ensure only one instance of a Bash script is running every minute:
#!/bin/bash
## Copyright (C) 2009 Przemyslaw Pawelczyk <[email protected]>
##
## This script is licensed under the terms of the MIT license.
## Source https://gist.github.com/przemoc/571091
## https://opensource.org/licenses/MIT
#
# Lockable script boilerplate
### HEADER ###
LOCKFILE="/var/lock/`basename $0`"
LOCKFD=99
# PRIVATE
_lock() { flock -$1 $LOCKFD; }
_no_more_locking() { _lock u; _lock xn && rm -f $LOCKFILE; }
_prepare_locking() { eval "exec $LOCKFD>\"$LOCKFILE\""; trap _no_more_locking EXIT; }
# ON START
_prepare_locking
# PUBLIC
exlock_now() { _lock xn; }  # obtain an exclusive lock immediately or fail
exlock() { _lock x; }       # obtain an exclusive lock
shlock() { _lock s; }       # obtain a shared lock
unlock() { _lock u; }       # drop a lock
# Simplest example is avoiding running multiple instances of script.
exlock_now || exit 1
### BEGIN OF SCRIPT ###
_rsync="/usr/bin/rsync"
_rsync_opt='-az -H --delete --numeric-ids --exclude=cache/css --exclude=tmp/js'
# user name for ssh
u="vivek"
# server nodes
servers="node01 node02"
# Source and dest
S='/home/vivek/wwwfiles/'
D='/home/vivek/wwwfiles'
# Let us loop it and do it
for b in ${servers}
do
  ${_rsync} ${_rsync_opt} "$@" ${S} ${u}@${b}:${D}
done
### END OF SCRIPT ###
# Remember! Lock file is removed when one of the scripts exits and it is
# the only script holding the lock or lock is not acquired at all.
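If you do not want to modify the script itself, an alternative is to wrap the crontab entry in flock so overlapping runs are simply skipped. A minimal sketch, assuming the /home/jobs/sync.cache.sh script from the question and a lock file path of your choosing:
# Run every minute, but skip this run if the previous one still holds the lock
* * * * * /usr/bin/flock -n /var/lock/sync.cache.lock /home/jobs/sync.cache.sh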
  5. First, we will download the Node Exporter on all machines (check the latest available version on the Node Exporter releases page):
wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
Extract the downloaded archive:
tar -xf node_exporter-1.2.2.linux-amd64.tar.gz
Move the node_exporter binary to /usr/local/bin:
sudo mv node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin
Remove the residual files with:
rm -r node_exporter-1.2.2.linux-amd64*
Next, we will create users and service files for node_exporter. For security reasons, it is always recommended to run any services/daemons in separate accounts of their own. Thus, we are going to create a user account for node_exporter. We have used the -r flag to indicate it is a system account, and set the default shell to /bin/false using -s to prevent logins.
sudo useradd -rs /bin/false node_exporter
Then, we will create a systemd unit file so that node_exporter can be started at boot.
sudo nano /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
Since we have created a new unit file, we must reload the systemd daemon, set the service to always run at boot, and start it:
sudo systemctl daemon-reload
sudo systemctl enable node_exporter
sudo systemctl start node_exporter
sudo systemctl status node_exporter
Configure UFW / Firewall on Ubuntu:
sudo ufw allow from 10.0.0.46 to any port 9100
sudo ufw status numbered
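Once the service is running, it is worth confirming that the exporter is actually serving metrics before pointing Prometheus at it. A quick check from the same machine (9100 is node_exporter's default port):
# Should return a long list of node_* and go_* metrics
curl -s http://localhost:9100/metrics | head -n 20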
  6. Prometheus is an open-source system and service monitoring and alerting tool used for recording real-time services and collecting metrics in a time-series database. It s written in Go and licensed under the Apache 2 License originally developed by SoundCloud.In this tutorial, we will show you how to install Prometheus on Ubuntu 20.04 server, which can be done easily if you follow it step by step Requirements: For the purposes of this tutorial, we will use an Ubuntu20.04 VPS. Access to the root user account (or a user with sudo privileges) Step 1: Log in to the Server & Update the Server OS Packages First, log in to your Ubuntu 20.04 server via SSH as the root use ssh root@IP_ADDRESS -p PORT_NUMBER Don’t forget to replace IP_Address and Port_Number with your server’s actual IP address and the SSH port number. Also, you should replace ‘root’ with the username of the admin account if needed. Once you are in, run the following commands to update the package index and upgrade all installed packages to the latest available version. apt-get update apt-get upgrade Step 2. Creating Prometheus System Users and Directory The Prometheus server requires a service user account to run. You can name your user however you like, but we will create a user named prometheus. This user will be a system user (-r) who will be unable to get a shell (-s /bin/false) useradd --no-create-home -rs /bin/false prometheus Also, we need to create directories for configuration files and other Prometheus data. mkdir /etc/prometheus mkdir /var/lib/prometheus Now we will have to update the group and user ownership on the newly created directories. chown prometheus:prometheus /etc/prometheus chown prometheus:prometheus /var/lib/prometheus Step 3. Download Prometheus Binary File Prometheus is included by default on the Ubuntu 20.04 repositories. apt-cache policy prometheus prometheus: Installed: (none) Candidate: 2.15.2+ds-2 Version table: 2.15.2+ds-2 500 500 http://us.archive.ubuntu.com/ubuntu focal/universe amd64 Packages However, the Prometheus release version provided by the default Ubuntu repositories may not be up-to-date. At the time of writing this article, the latest stable version of Prometheus is 2.30.3. But before downloading, visit the official Prometheus downloads page and check if there is a new version available. You can download it using the following command: wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz Once the tarball is downloaded, verify the tarball checksum with the following command: sha256sum prometheus-2.30.3.linux-amd64.tar.gz You should see an output that looks similar to the one below: 1ccd386d05f73a98b69aa5e0ed31fffac95cd9dadf7df1540daf2f182c5287e2 prometheus-2.30.3.linux-amd64.tar.gz Compare the hash value from the above output to the checksum value on the Prometheus download page. If they match, that means the file’s integrity is validated. Now you have successfully downloaded the Prometheus file and now you will extract it to the /opt directory using the tar command: tar xvzf prometheus-2.30.3.linux-amd64.tar.gz -C /opt Next, you need to copy the binary files to /usr/local/bin directory and fix the permissions. 
This is done with the following commands: mv /opt/prometheus-2.30.3.linux-amd64/prometheus /opt/prometheus-2.30.3.linux-amd64/promtool /usr/local/bin/ chown prometheus:prometheus /usr/local/bin/prometheus /usr/local/bin/promtool Also, we need to copy the consoles and console_libraries directories to Prometheus configuration directory, /etc/prometheus mv /opt/prometheus-2.30.3.linux-amd64/consoles /opt/prometheus-2.30.3.linux-amd64/console_libraries /etc/prometheus/ chown -R prometheus:prometheus /etc/prometheus/consoles /etc/prometheus/console_libraries Step 4: Create Prometheus Configuration file Prometheus configuration file has been prepared and available on the extracted archive folder, and you need just to copy it to the Prometheus configuration /etc/prometheus directory. mv /opt/prometheus-2.30.3.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml chown prometheus:prometheus /etc/prometheus/prometheus.yml The content of the prometheus.yml file: # my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Alertmanager configuration alerting: alertmanagers: - static_configs: - targets: # - alertmanager:9093 # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. rule_files: # - "first_rules.yml" # - "second_rules.yml" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: "prometheus" # metrics_path defaults to '/metrics' # scheme defaults to 'http'. static_configs: - targets: ["localhost:9090"] The configuration is set up to scrape every 15 seconds and Prometheus listens on port 9090. Linux server scrape example: global: scrape_interval: 1s scrape_configs: - job_name: 'prometheus' scrape_interval: 5s static_configs: - targets: ['prometheus.linux-network.home:9090'] - targets: ['plex.linux-network.home:9100'] - targets: ['grafana.linux-network.home:9100'] - targets: ['NS1.linux-network.home:9100'] - targets: ['NS2.linux-network.home:9100'] - targets: ['WEB1.linux-network.home:9100'] - targets: ['DB1.linux-network.home:9100'] - targets: ['PVE2.linux-network.home:9100'] Step 5: Create Prometheus Systemd Service file Now we need to create a system service file. nano /etc/systemd/system/prometheus.service In that file, add the following content: [Unit] Description=Prometheus Wants=network-online.target After=network-online.target [Service] User=prometheus Group=prometheus Type=simple ExecStart=/usr/local/bin/prometheus \ --config.file /etc/prometheus/prometheus.yml \ --storage.tsdb.path /var/lib/prometheus/ \ --web.console.templates=/etc/prometheus/consoles \ --web.console.libraries=/etc/prometheus/console_libraries [Install] WantedBy=multi-user.target After adding the content save and close the file. To use the newly created service you will have to reload the daemon services, Use the below command to reload daemon services. 
systemctl daemon-reload
You can now start and enable the Prometheus service using the below commands:
systemctl start prometheus
systemctl enable prometheus
To check and verify the status of your Prometheus service, run the following command:
systemctl status prometheus
Output:
● prometheus.service - Prometheus
     Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-10-23 19:15:21 UTC; 4s ago
   Main PID: 9884 (prometheus)
      Tasks: 1 (limit: 2245)
     Memory: 336.0M
     CGroup: /system.slice/prometheus.service
             └─9884 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries
Prometheus installation and configuration are now set up, and the status shows Active: active (running). The Prometheus service is up and running and you can access it from any web browser: http://Your_server_IP:9090 To check the status of your nodes, go to Status > Targets. That’s it! The installation of Prometheus on Ubuntu 20.04 has been completed.
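Whenever you add scrape targets to /etc/prometheus/prometheus.yml, it is a good habit to validate the file before restarting the service; promtool was copied to /usr/local/bin in Step 3. A small sketch using the paths from this post:
# Syntax-check the configuration file
promtool check config /etc/prometheus/prometheus.yml
# Apply the change and confirm the service came back up
sudo systemctl restart prometheus
sudo systemctl status prometheus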
  7. Windows users are used to creating shortcuts to have fast access to their files and folders. This is especially useful when these are buried deep in their system. This feature isn't as obvious on most Linux systems as it is on Windows. You can create a shortcut on a Unix-like operating system using a symlink. Create Symlink in Linux, terminal way (the link will appear in the folder the terminal is pointed to): ln -s /folderorfile/link/will/point/to /name/of/the/link
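As a concrete illustration (hypothetical paths, adjust to your own), creating a link to a deeply nested directory and then checking where it points:
# Link ~/projects to a long path, then confirm the target
ln -s /srv/data/customers/2021/projects ~/projects
ls -l ~/projects
# Removing the link itself leaves the target untouched
rm ~/projects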
  8. Installing CIFS Utilities Packages To mount a Windows share on a Linux system, first you need to install the CIFS utilities package. Installing CIFS utilities on CentOS and Fedora: sudo dnf install cifs-utils Auto Mounting sudo nano /etc/fstab Add the following line to the file: # <file system> <dir> <type> <options> <dump> <pass> //WIN_SHARE_IP/share_name /mnt/win_share cifs credentials=/etc/win-credentials,file_mode=0755,dir_mode=0755 0 0 Run the following command to mount the share: sudo mount -a Creating Credential File For better security it is recommended to use a credentials file, which contains the share username, password and domain: /etc/win-credentials The credentials file has the following format: username = user password = password domain = domain The file must not be readable by regular users. To set the correct permissions and ownership run: sudo chown root: /etc/win-credentials sudo chmod 600 /etc/win-credentials Create Symlink in Linux Create a shortcut to your new mounted file share, terminal way (the link will appear in the folder the terminal is pointed to): ln -s /folderorfile/link/will/point/to /name/of/the/link
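Before committing the entry to /etc/fstab, you can test the share with a one-off manual mount using the same placeholders as above; if this works, the fstab line almost certainly will too:
# Create the mount point and mount the share once, using the credentials file
sudo mkdir -p /mnt/win_share
sudo mount -t cifs -o credentials=/etc/win-credentials //WIN_SHARE_IP/share_name /mnt/win_share
# Confirm it is mounted, then unmount when done testing
findmnt /mnt/win_share
sudo umount /mnt/win_share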
  9. How to Import and Export Databases Export To Export a database, open up terminal, making sure that you are not logged into MySQL and type, mysqldump -u [username] -p [database name] > [database name].sql The database that you selected in the command will now be exported to your droplet. Import To import a database, first create a new blank database in the MySQL shell to serve as a destination for your data. CREATE DATABASE newdatabase; Then log out of the MySQL shell and type the following on the command line: mysql -u [username] -p newdatabase < [database name].sql With that, your chosen database has been imported into your destination database in MySQL. Create a database user login to mysql mysql -u root -p run the following command mysql> CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'user_password'; Replace newuser with the new user name, and user_password with the user password. Grant Privileges to a MySQL User Account ALL PRIVILEGES – Grants all privileges to a user account. CREATE – The user account is allowed to create databases and tables. DROP - The user account is allowed to drop databases and tables. DELETE - The user account is allowed to delete rows from a specific table. INSERT - The user account is allowed to insert rows into a specific table. SELECT – The user account is allowed to read a database. UPDATE - The user account is allowed to update table rows. To grant specific privileges to a user account, use the following syntax: Grand all privileges to a user account over a specific database: mysql> GRANT ALL PRIVILEGES ON database_name.* TO 'database_user'@'localhost'; Grand all privileges to a user account on all databases: mysql> GRANT ALL PRIVILEGES ON *.* TO 'database_user'@'localhost'; Grand all privileges to a user account over a specific table from a database: mysql> GRANT ALL PRIVILEGES ON database_name.table_name TO 'database_user'@'localhost'; Grant multiple privileges to a user account over a specific database: mysql> GRANT SELECT, INSERT, DELETE ON database_name.* TO database_user@'localhost'; Display MySQL User Account Privileges To find the privilege(s) granted to a specific MySQL user account, use the SHOW GRANTS statement: mysql> SHOW GRANTS FOR 'database_user'@'localhost'; The output will look something like below: +---------------------------------------------------------------------------+ | Grants for database_user@localhost | +---------------------------------------------------------------------------+ | GRANT USAGE ON *.* TO 'database_user'@'localhost' | | GRANT ALL PRIVILEGES ON `database_name`.* TO 'database_user'@'localhost' | +---------------------------------------------------------------------------+ 2 rows in set (0.00 sec) Revoke Privileges from a MySQL User Account The syntax to revoke one or more privileges from a user account is almost identical as when granting privileges. To revoke all privileges from a user account over a specific database, run the following command: mysql> REVOKE ALL PRIVILEGES ON database_name.* FROM 'database_user'@'localhost'; Remove an Existing MySQL User Account To delete a MySQL user account use the DROP USER statement: mysql> DROP USER 'user'@'localhost'
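Putting the pieces together, a typical flow for moving a database and handing it to a new application account might look like this from the shell (database, user, and password names are placeholders; you will be prompted for the root password):
# Export on the source server
mysqldump -u root -p sourcedb > sourcedb.sql
# Create the destination database and import the dump on the target server
mysql -u root -p -e "CREATE DATABASE sourcedb;"
mysql -u root -p sourcedb < sourcedb.sql
# Create an application account limited to that database
mysql -u root -p -e "CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'strong_password'; GRANT SELECT, INSERT, UPDATE, DELETE ON sourcedb.* TO 'appuser'@'localhost';"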
  10. Security Enhanced Linux or SELinux is a security mechanism built into the Linux kernel used by RHEL-based distributions. SELinux adds an additional layer of security to the system by allowing administrators and users to control access to objects based on policy rules. SELinux policy rules specify how processes and users interact with each other as well as how processes and users interact with files. When there is no rule explicitly allowing access to an object, such as for a process opening a file, access is denied. SELinux has three modes of operation:
Enforcing: SELinux allows access based on SELinux policy rules.
Permissive: SELinux only logs actions that would have been denied if running in enforcing mode. This mode is useful for debugging and creating new policy rules.
Disabled: No SELinux policy is loaded, and no messages are logged.
By default, in CentOS 8, SELinux is enabled and in enforcing mode. It is highly recommended to keep SELinux in enforcing mode. However, sometimes it may interfere with the functioning of some application, and you need to set it to the permissive mode or disable it completely. In this tutorial, we will explain how to disable SELinux on CentOS 8.
Prerequisites Only the root user or a user with sudo privileges can change the SELinux mode.
Checking the SELinux Mode Use the sestatus command to check the status and the mode in which SELinux is running:
sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
The output above shows that SELinux is enabled and set to enforcing mode.
Changing SELinux Mode to Permissive When enabled, SELinux can be either in enforcing or permissive mode. You can temporarily change the mode from enforcing to permissive with the following command:
sudo setenforce 0
However, this change is valid for the current runtime session only and does not persist between reboots. To permanently set the SELinux mode to permissive, follow the steps below: Open the /etc/selinux/config file and set the SELINUX mode to permissive:
/etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
#   targeted - Targeted processes are protected,
#   minimum - Modification of targeted policy. Only selected processes are protected.
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted
Save the file and run the setenforce 0 command to change the SELinux mode for the current session:
sudo setenforce 0
Disabling SELinux Instead of disabling SELinux, it is strongly recommended to change the mode to permissive. Disable SELinux only when required for the proper functioning of your application. Perform the steps below to disable SELinux on your CentOS 8 system permanently: Open the /etc/selinux/config file and change the SELINUX value to disabled:
/etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#   targeted - Targeted processes are protected,
#   minimum - Modification of targeted policy. Only selected processes are protected.
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted
Save the file and reboot the system:
sudo shutdown -r now
When the system is booted, use the sestatus command to verify that SELinux has been disabled:
sestatus
The output should look like this:
SELinux status: disabled
Conclusion SELinux is a mechanism to secure a system by implementing mandatory access control (MAC). SELinux is enabled by default on CentOS 8 systems, but it can be disabled by editing the configuration file and rebooting the system. To learn more about the powerful features of SELinux, visit the CentOS SELinux guide.
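For day-to-day checks, the shorter getenforce command is handy alongside sestatus; this is just a convenience note, not part of the original steps:
# Print the current mode (Enforcing, Permissive, or Disabled)
getenforce
# Switch back to enforcing for the current session after testing in permissive mode
sudo setenforce 1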
  11. Introduction The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software. In this guide, you will install an Apache web server with virtual hosts on your CentOS 8 server. Prerequisites You will need the following to complete this guide: A non-root user with sudo privileges configured on your server. Ensure that a basic firewall is configured. Step 1 — Installing Apache Apache is available within CentOS’s default software repositories, which means you can install it with the dnf package manager. As the non-root sudo user configured in the prerequisites, install the Apache package: sudo dnf install httpd After confirming the installation, dnf will install Apache and all required dependencies. If you also plan to configure Apache to serve content over HTTPS, you will also want to open up port 443 by enabling the https service: sudo firewall-cmd --permanent --add-service=http sudo firewall-cmd --permanent --add-service=https Next, reload the firewall to put these new rules into effect: sudo firewall-cmd --reload After the firewall reloads, you are ready to start the service and check the web server. Step 2 — Checking your Web Server Apache does not automatically start on CentOS once the installation completes, so you will need to start the Apache process manually: sudo systemctl start httpd Verify that the service is running with the following command: sudo systemctl status httpd You will receive an active status when the service is running: Output ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disa> Active: active (running) since Thu 2020-04-23 22:25:33 UTC; 11s ago Docs: man:httpd.service(8) Main PID: 14219 (httpd) Status: "Running, listening on: port 80" Tasks: 213 (limit: 5059) Memory: 24.9M CGroup: /system.slice/httpd.service ├─14219 /usr/sbin/httpd -DFOREGROUND ├─14220 /usr/sbin/httpd -DFOREGROUND ├─14221 /usr/sbin/httpd -DFOREGROUND ├─14222 /usr/sbin/httpd -DFOREGROUND └─14223 /usr/sbin/httpd -DFOREGROUND ... As this output indicates, the service has started successfully. However, the best way to test this is to request a page from Apache. You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server’s IP address, you can get it a few different ways from the command line. Type q to return to the command prompt and then type: hostname -I This command will display all of the host’s network addresses, so you will get back a few IP addresses separated by spaces. You can try each in your web browser to determine whether they work. Alternatively, you can use curl to request your IP from icanhazip.com, which will give you your public IPv4 address as read from another location on the internet: curl -4 icanhazip.com When you have your server’s IP address, enter it into your browser’s address bar: http://your_server_ip You’ll see the default CentOS 8 Apache web page: This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations. Step 3 — Managing the Apache Process Now that the service is installed and running, you can now use different systemctl commands to manage the service. 
To stop your web server, type: sudo systemctl stop httpd To start the web server when it is stopped, type: sudo systemctl start httpd To stop and then start the service again, type: sudo systemctl restart httpd If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command: sudo systemctl reload httpd By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing: sudo systemctl disable httpd To re-enable the service to start up at boot, type: sudo systemctl enable httpd Apache will now start automatically when the server boots again. The default configuration for Apache will allow your server to host a single website. If you plan on hosting multiple domains on your server, you will need to configure virtual hosts on your Apache web server. Step 4 — Setting Up Virtual Hosts (Recommended) When using the Apache web server, you can use virtual hosts (if you are more familiar with Nginx, these are similar to server blocks) to encapsulate configuration details and host more than one domain from a single server. In this step, you will set up a domain called example.com, but you should replace this with your own domain name. Apache on CentOS 8 has one virtual host enabled by default that is configured to serve documents from the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, you will create a directory structure within /var/www for the example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn’t match any other sites. Create the html directory for example.com as follows, using the -p flag to create any necessary parent directories: sudo mkdir -p /var/www/example.com/html Create an additional directory to store log files for the site: sudo mkdir -p /var/www/example.com/log Next, assign ownership of the html directory with the $USER environmental variable: sudo chown -R $USER:$USER /var/www/example.com/html Make sure that your web root has the default permissions set: sudo chmod -R 755 /var/www Next, create a sample index.html page using vi or your favorite editor: sudo vi /var/www/example.com/html/index.html Press i to switch to INSERT mode and add the following sample HTML to the file: /var/www/example.com/html/index.html <html> <head> <title>Welcome to Example.com!</title> </head> <body> <h1>Success! The example.com virtual host is working!</h1> </body> </html> Save and close the file by pressing ESC, typing :wq, and pressing ENTER. With your site directory and sample index file in place, you are almost ready to create the virtual host files. Virtual host files specify the configuration of your separate sites and tell the Apache web server how to respond to various domain requests. Before you create your virtual hosts, you will need to create a sites-available directory to store them in. You will also create the sites-enabled directory that tells Apache that a virtual host is ready to serve to visitors. The sites-enabled directory will hold symbolic links to virtual hosts that we want to publish. Create both directories with the following command: sudo mkdir /etc/httpd/sites-available /etc/httpd/sites-enabled Next, you will tell Apache to look for virtual hosts in the sites-enabled directory. 
To accomplish this, edit Apache’s main configuration file using vi or your favorite text editor and add a line declaring an optional directory for additional configuration files: sudo vi /etc/httpd/conf/httpd.conf Press capital G to navigate towards the end of the file. Then press i to switch to INSERT mode and add the following line to the very end of the file: /etc/httpd/conf/httpd.conf ... # Supplemental configuration # # Load config files in the "/etc/httpd/conf.d" directory, if any. IncludeOptional conf.d/*.conf IncludeOptional sites-enabled/*.conf Save and close the file when you are done adding that line. Now that you have your virtual host directories in place, you will create your virtual host file. Start by creating a new file in the sites-available directory: sudo vi /etc/httpd/sites-available/example.com.conf Add in the following configuration block, and change the example.com domain to your domain name: /etc/httpd/sites-available/example.com.conf <VirtualHost *:80> ServerName www.example.com ServerAlias example.com DocumentRoot /var/www/example.com/html ErrorLog /var/www/example.com/log/error.log CustomLog /var/www/example.com/log/requests.log combined </VirtualHost> Copy This will tell Apache where to find the root directly that holds the publicly accessible web documents. It also tells Apache where to store error and request logs for this particular site. Save and close the file when you are finished. Now that you have created the virtual host files, you will enable them so that Apache knows to serve them to visitors. To do this, create a symbolic link for each virtual host in the sites-enabled directory: sudo ln -s /etc/httpd/sites-available/example.com.conf /etc/httpd/sites-enabled/example.com.conf Your virtual host is now configured and ready to serve content. Before restarting the Apache service, let’s make sure that SELinux has the correct policies in place for your virtual hosts. Step 5 — Adjusting SELinux Permissions for Virtual Hosts (Recommended) SELinux is a Linux kernel security module that brings heightened security for Linux systems. CentOS 8 comes equipped with SELinux configured to work with the default Apache configuration. Since you changed the default configuration by setting up a custom log directory in the virtual hosts configuration file, you will receive an error if you attempt to start the Apache service. To resolve this, you need to update the SELinux policies to allow Apache to write to the necessary files. There are different ways to set policies based on your environment’s needs as SELinux allows you to customize your security level. This step will cover two methods of adjusting Apache policies: universally and on a specific directory. Adjusting policies on directories is more secure, and is therefore the recommended approach. Adjusting Apache Policies Universally Setting the Apache policy universally will tell SELinux to treat all Apache processes identically by using the httpd_unified Boolean. While this approach is more convenient, it will not give you the same level of control as an approach that focuses on a file or directory policy. Run the following command to set a universal Apache policy: sudo setsebool -P httpd_unified 1 The setsebool command changes SELinux Boolean values. The -P flag will update the boot-time value, making this change persist across reboots. httpd_unified is the Boolean that will tell SELinux to treat all Apache processes as the same type, so you enabled it with a value of 1. 
Adjusting Apache Policies on a Directory Individually setting SELinux permissions for the /var/www/example.com/log directory will give you more control over your Apache policies, but may also require more maintenance. Since this option is not universally setting policies, you will need to manually set the context type for any new log directories specified in your virtual host configurations. First, check the context type that SELinux gave the /var/www/example.com/log directory: sudo ls -dlZ /var/www/example.com/log/ This command lists and prints the SELinux context of the directory. You will receive output similar to the following: Output drwxr-xr-x. 2 root root unconfined_u:object_r:httpd_sys_content_t:s0 6 Apr 23 23:51 /var/www/example.com/log/ The current context is httpd_sys_content_t, which tells SELinux that the Apache process can only read files created in this directory. In this tutorial, you will change the context type of the /var/www/example.com/log directory to httpd_log_t. This type will allow Apache to generate and append to web application log files: sudo semanage fcontext -a -t httpd_log_t "/var/www/example.com/log(/.*)?" Next, use the restorecon command to apply these changes and have them persist across reboots: sudo restorecon -R -v /var/www/example.com/log The -R flag runs this command recursively, meaning it will update any existing files to use the new context. The -v flag will print the context changes the command made. You will receive the following output confirming the changes: Output Relabeled /var/www/example.com/log from unconfined_u:object_r:httpd_sys_content_t:s0 to unconfined_u:object_r:httpd_log_t:s0 You can list the contexts once more to see the changes: sudo ls -dlZ /var/www/example.com/log/ The output reflects the updated context type: Output drwxr-xr-x. 2 root root unconfined_u:object_r:httpd_log_t:s0 6 Apr 23 23:51 /var/www/example.com/log/ Now that the /var/www/example.com/log directory is using the httpd_log_t type, you are ready to test your virtual host configuration. Step 6 — Testing the Virtual Host (Recommended) Once the SELinux context has been updated with either method, Apache will be able to write to the /var/www/example.com/log directory. You can now successfully restart the Apache service: sudo systemctl restart httpd List the contents of the /var/www/example.com/log directory to see if Apache created the log files: ls -lZ /var/www/example.com/log You’ll receive confirmation that Apache was able to create the error.log and requests.log files specified in the virtual host configuration: Output -rw-r--r--. 1 root root system_u:object_r:httpd_log_t:s0 0 Apr 24 00:06 error.log -rw-r--r--. 1 root root system_u:object_r:httpd_log_t:s0 0 Apr 24 00:06 requests.log Now that you have your virtual host set up and SELinux permissions updated, Apache will now serve your domain name. You can test this by navigating to http://example.com, where you should see something like this: This confirms that your virtual host is successfully configured and serving content. Repeat Steps 4 and 5 to create new virtual hosts with SELinux permissions for additional domains.
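If your domain's DNS does not point at the server yet, you can still exercise the new virtual host from the command line. A small sketch (example.com and your_server_ip are the placeholders used above):
# Check the Apache configuration syntax after adding the vhost
sudo apachectl configtest
# Request the site by name without touching DNS
curl -H "Host: example.com" http://your_server_ip/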
  12. To mount a Windows share on Ubuntu Server: 1. Share the folder on your Windows box 2. Create a mount point in /mnt: sudo mkdir /mnt/windows-share 3. Install CIFS-UTILS: sudo apt-get install cifs-utils 4. Create a credential file for the Windows share (you can name the credential file anything you want): sudo nano /etc/cifs-credentials 5. Add your credentials to the file: username=username password=password domain=example.com 6. On your Ubuntu server open the file: sudo nano /etc/fstab and edit it with your information: //WIN_SHARE_IP/share_name /mnt/windows-share cifs credentials=/etc/cifs-credentials,file_mode=0755,dir_mode=0755 0 0 7. Run to mount the share: sudo mount -a Create Symlink in Linux 8. Create a shortcut to your new mounted file share: (Article here) Terminal way (the link will appear in the folder the terminal is pointed to): ln -s /folderorfile/link/will/point/to /name/of/the/link
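After running mount -a, it is worth confirming that the fstab entry parsed cleanly, since a typo there can slow down the next boot. A quick check using the mount point from step 2:
# Show the mounted share, its size and usage
df -h /mnt/windows-share
# List a few files to prove the share is readable
ls -l /mnt/windows-share | head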
  13. Configure static IP address using Netplan Netplan network configuration was first introduced in Ubuntu 18.04 LTS Bionic Beaver and is available in all new Ubuntu 18.04+ installations. Ubuntu Server To configure a static IP address on your Ubuntu 20.04 server you need to modify the relevant netplan network configuration file within the /etc/netplan/ directory. This static configuration (using gateway4) has been deprecated:
network:
  ethernets:
    enp0s3:
      addresses: [192.168.1.3/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [4.2.2.2, 8.8.8.8]
  version: 2
This is the new static configuration:
network:
  ethernets:
    enp0s3:
      addresses: [192.168.1.3/24]
      routes:
        - to: default
          via: 192.168.1.99
      nameservers:
        addresses: [4.2.2.2, 8.8.8.8]
  version: 2
Once ready apply changes with:
$ sudo netplan apply
In case you run into some issues execute:
$ sudo netplan --debug apply
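Before a blind netplan apply on a remote box, netplan also ships a safer option that rolls the change back automatically if you lose connectivity and cannot confirm it:
# Apply the new configuration, but revert after 120 seconds unless you press ENTER to accept
sudo netplan try --timeout 120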
  14. Tautulli will be installed to /opt/Tautulli. Open a terminal Install Git Ubuntu/Debian: sudo apt-get install git-core Fedora: sudo yum install git Install prerequisites: Ubuntu/Debian: sudo apt-get install python python-setuptools tzdata Fedora: sudo yum install python python2-setuptools Type: cd /opt Type: sudo git clone https://github.com/Tautulli/Tautulli.git Optional: Ubuntu/Debian: sudo addgroup tautulli && sudo adduser --system --no-create-home tautulli --ingroup tautulli CentOS/Fedora: sudo adduser --system --no-create-home tautulli sudo chown tautulli:tautulli -R /opt/Tautulli Type: cd Tautulli to start Tautulli Type: python Tautulli.py Tautulli will be loaded in your browser or listening on http://localhost:8181 To run Tautulli in the background on startup: # Tautulli - Stats for Plex Media Server usage # # Service Unit file for systemd system manager # # INSTALLATION NOTES # # 1. Copy this file into your systemd service unit directory (often '/lib/systemd/system') # and name it 'tautulli.service' with the following command: # sudo cp /opt/Tautulli/init-scripts/init.systemd /lib/systemd/system/tautulli.service # # 2. Edit the new tautulli.service file with configuration settings as required. # More details in the "CONFIGURATION NOTES" section shown below. # # 3. Enable boot-time autostart with the following commands: # sudo systemctl daemon-reload # sudo systemctl enable tautulli.service # # 4. Start now with the following command: # sudo systemctl start tautulli.service # # CONFIGURATION NOTES # # - The example settings in this file assume that you will run Tautulli as user: tautulli # - The example settings in this file assume that Tautulli is installed to: /opt/Tautulli # # - To create this user and give it ownership of the Tautulli directory: # Ubuntu/Debian: sudo addgroup tautulli && sudo adduser --system --no-create-home tautulli --ingroup tautulli # CentOS/Fedora: sudo adduser --system --no-create-home tautulli # sudo chown tautulli:tautulli -R /opt/Tautulli # # - Adjust ExecStart= to point to: # 1. Your Tautulli executable # - Default: /opt/Tautulli/Tautulli.py # 2. Your config file (recommended is to put it somewhere in /etc) # - Default: --config /opt/Tautulli/config.ini # 3. Your datadir (recommended is to NOT put it in your Tautulli exec dir) # - Default: --datadir /opt/Tautulli # # - Adjust User= and Group= to the user/group you want Tautulli to run as. # # - WantedBy= specifies which target (i.e. runlevel) to start Tautulli for. # multi-user.target equates to runlevel 3 (multi-user text mode) # graphical.target equates to runlevel 5 (multi-user X11 graphical mode) [Unit] Description=Tautulli - Stats for Plex Media Server usage Wants=network-online.target After=network-online.target [Service] ExecStart=/opt/Tautulli/Tautulli.py --config /opt/Tautulli/config.ini --datadir /opt/Tautulli --quiet --daemon --nolaunch GuessMainPID=no Type=forking User=tautulli Group=tautulli [Install] WantedBy=multi-user.target
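Once the unit file is in place, the whole enable-and-verify sequence condenses to a few commands; the service name matches the notes above:
sudo systemctl daemon-reload
sudo systemctl enable tautulli.service
sudo systemctl start tautulli.service
# Confirm the service is running and listening on the default port 8181
sudo systemctl status tautulli.service
ss -tlnp | grep 8181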
  15. How to install Glances
sudo apt-get install python-pip build-essential python-dev lm-sensors
sudo pip install psutil logutils bottle batinfo https://bitbucket.org/gleb_zhulik/py3sensors/get/tip.tar.gz zeroconf netifaces pymdstat influxdb elasticsearch potsdb statsd pystache docker-py pysnmp pika py-cpuinfo bernhard
sudo pip install glances
Basic usage To start glances simply type glances in a terminal. In glances you’ll see a lot of information about the resources of your system: CPU, Load, Memory, Swap, Network, Disk I/O and Processes, all in one page. By default the color code means:
GREEN : the statistic is “OK”
BLUE : the statistic is “CAREFUL” (to watch)
VIOLET : the statistic is “WARNING” (alert)
RED : the statistic is “CRITICAL” (critical)
When Glances is running, you can press some special keys to give commands to it:
c: Sort processes by CPU%
m: Sort processes by MEM%
p: Sort processes by name
i: Sort processes by IO rate
d: Show/hide disk I/O stats
f: Show/hide file system stats
n: Show/hide network stats
s: Show/hide sensors stats
b: Bit/s or Byte/s for network IO
w: Delete warning logs
x: Delete warning and critical logs
1: Global CPU or per-core stats
h: Show/hide this help message
q: Quit (Esc and Ctrl-C also work)
l: Show/hide log messages
CPU, RAM, Swap monitoring
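Beyond the interactive terminal view, a couple of commonly used launch options (both part of standard Glances, but double-check your installed version's --help):
# Refresh every 5 seconds instead of the default
glances -t 5
# Serve the same dashboard over HTTP (default port 61208), using the bottle package installed above
glances -w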
  16. Step 1 — Installing OpenVPN To start, we will install OpenVPN on the server. We'll also install Easy RSA, a public key infrastructure management tool which will help us set up an internal certificate authority (CA) for use with our VPN. We'll also use Easy RSA to generate our SSL key pairs later on to secure the VPN connections. Log in to the server as the non-root sudo user, and update the package lists to make sure you have all the latest versions. sudo yum update -y The Extra Packages for Enterprise Linux (EPEL) repository is an additional repository managed by the Fedora Project containing non-standard but popular packages. OpenVPN isn't available in the default CentOS repositories but it is available in EPEL, so install EPEL: sudo yum install epel-release -y Then update your package lists once more: sudo yum update -y Next, install OpenVPN and wget, which we will use to install Easy RSA: sudo yum install -y openvpn wget Using wget, download Easy RSA. For the purposes of this tutorial, we recommend using easy-rsa-2 because there’s more available documentation for this version. You can find the download link for the latest version of easy-rsa-2 on the project’s Releases page: wget -O /tmp/easyrsa https://github.com/OpenVPN/easy-rsa-old/archive/2.3.3.tar.gz Next, extract the compressed file with tar: tar xfz /tmp/easyrsa This will create a new directory on your server called easy-rsa-old-2.3.3. Make a new subdirectory under /etc/openvpn and name it easy-rsa: sudo mkdir /etc/openvpn/easy-rsa Copy the extracted Easy RSA files over to the new directory: sudo cp -rf easy-rsa-old-2.3.3/easy-rsa/2.0/* /etc/openvpn/easy-rsa Then change the directory’s owner to your non-root sudo user: sudo chown sammy /etc/openvpn/easy-rsa/ Once these programs are installed and have been moved to the right locations on your system, the next step is to customize the server-side configuration of OpenVPN. Step 2 — Configuring OpenVPN Like many other widely-used open-source tools, there are dozens of configuration options available to you. In this section, we will provide instructions on how to set up a basic OpenVPN server configuration. OpenVPN has several example configuration files in its documentation directory. First, copy the sample server.conf file as a starting point for your own configuration file. sudo cp /usr/share/doc/openvpn-2.4.4/sample/sample-config-files/server.conf /etc/openvpn Open the new file for editing with the text editor of your choice. We’ll use nano in our example, which you can download with the yum install nano command if you don’t have it on your server already: sudo nano /etc/openvpn/server.conf There are a few lines we need to change in this file, most of which just need to be uncommented by removing the semicolon, ;, at the beginning of the line. The functions of these lines, and the other lines not mentioned in this tutorial, are explained in-depth in the comments above each one. To get started, find and uncomment the line containing push "redirect-gateway def1 bypass-dhcp". Doing this will tell your client to redirect all of its traffic through your OpenVPN server. Be aware that enabling this functionality can cause connectivity issues with other network services, like SSH: /etc/openvpn/server.conf push "redirect-gateway def1 bypass-dhcp" Because your client will not be able to use the default DNS servers provided by your ISP (as its traffic will be rerouted), you need to tell it which DNS servers it can use to connect to OpenVPN. 
You can pick different DNS servers, but here we'll use Google's public DNS servers which have the IPs of 8.8.8.8 and 8.8.4.4. Set this by uncommenting both push "dhcp-option DNS ..." lines and updating the IP addresses: /etc/openvpn/server.conf push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" We want OpenVPN to run with no privileges once it has started, so we need to tell it to run with a user and group of nobody. To enable this, uncomment the user nobody and group nobody lines: /etc/openvpn/server.conf user nobody group nobody Next, uncomment the topology subnet line. This, along with the server 10.8.0.0 255.255.255.0 line below it, configures your OpenVPN installation to function as a subnetwork and tells the client machine which IP address it should use. In this case, the server will become 10.8.0.1 and the first client will become 10.8.0.2: /etc/openvpn/server.conf topology subnet It’s also recommended that you add the following line to your server configuration file. This double checks that any incoming client certificates are truly coming from a client, hardening the security parameters we will establish in later steps: /etc/openvpn/server.conf remote-cert-eku "TLS Web Client Authentication" Lastly, OpenVPN strongly recommends that users enable TLS Authentication, a cryptographic protocol that ensures secure communications over a computer network. To do this, you will need to generate a static encryption key (named in our example as myvpn.tlsauth, although you can choose any name you like). Before creating this key, comment the line in the configuration file containing tls-auth ta.key 0 by prepending it with a semicolon. Then, add tls-crypt myvpn.tlsauth to the line below it: /etc/openvpn/server.conf ;tls-auth ta.key 0 tls-crypt myvpn.tlsauth Save and exit the OpenVPN server configuration file (in nano, press CTRL - X, Y, then ENTER to do so), and then generate the static encryption key with the following command: sudo openvpn --genkey --secret /etc/openvpn/myvpn.tlsauth Now that your server is configured, you can move on to setting up the SSL keys and certificates needed to securely connect to your VPN connection. Step 3 — Generating Keys and Certificates Easy RSA uses a set of scripts that come installed with the program to generate keys and certificates. In order to avoid re-configuring every time you need to generate a certificate, you can modify Easy RSA’s configuration to define the default values it will use for the certificate fields, including your country, city, and preferred email address. We’ll begin our process of generating keys and certificates by creating a directory where Easy RSA will store any keys and certs you generate: sudo mkdir /etc/openvpn/easy-rsa/keys The default certificate variables are set in the vars file in /etc/openvpn/easy-rsa, so open that file for editing: sudo nano /etc/openvpn/easy-rsa/vars Scroll to the bottom of the file and change the values that start with export KEY_ to match your information. The ones that matter the most are: KEY_CN: Here, enter the domain or subdomain that resolves to your server. KEY_NAME: You should enter server here. If you enter something else, you would also have to update the configuration files that reference server.key and server.crt. The other variables in this file that you may want to change are: KEY_COUNTRY: For this variable, enter the two-letter abbreviation of the country of your residence. KEY_PROVINCE: This should be the name or abbreviation of the state of your residence. 
KEY_CITY: Here, enter the name of the city you live in. KEY_ORG: This should be the name of your organization or company. KEY_EMAIL: Enter the email address that you want to be connected to the security certificate. KEY_OU: This should be the name of the “Organizational Unit” to which you belong, typically either the name of your department or team. The rest of the variables can be safely ignored outside of specific use cases. After you’ve made your changes, the file should look like this: /etc/openvpn/easy-rsa/vars . . . # These are the default values for fields # which will be placed in the certificate. # Don't leave any of these fields blank. export KEY_COUNTRY="US" export KEY_PROVINCE="NY" export KEY_CITY="New York" export KEY_ORG="DigitalOcean" export KEY_EMAIL="[email protected]" export [email protected] export KEY_CN=openvpn.example.com export KEY_NAME="server" export KEY_OU="Community" . . . Save and close the file. To start generating the keys and certificates, move into the easy-rsa directory and source in the new variables you set in the vars file: cd /etc/openvpn/easy-rsa source ./vars Run Easy RSA’s clean-all script to remove any keys and certificates already in the folder and generate the certificate authority: ./clean-all Next, build the certificate authority with the build-ca script. You'll be prompted to enter values for the certificate fields, but if you set the variables in the vars file earlier, all of your options will already be set as the defaults. You can press ENTER to accept the defaults for each one: ./build-ca This script generates a file called ca.key. This is the private key used to sign your server and clients’ certificates. If it is lost, you can no longer trust any certificates from this certificate authority, and if anyone is able to access this file they can sign new certificates and access your VPN without your knowledge. For this reason, OpenVPN recommends storing ca.key in a location that can be offline as much as possible, and it should only be activated when creating new certificates. Next, create a key and certificate for the server using the build-key-server script: ./build-key-server server As with building the CA, you'll see the values you’ve set as the defaults so you can hit ENTER at these prompts. Additionally, you’ll be prompted to enter a challenge password and an optional company name. If you enter a challenge password, you will be asked for it when connecting to the VPN from your client. If you don’t want to set a challenge password, just leave this line blank and press ENTER. At the end, enter Y to commit the changes. The last part of creating the server keys and certificates is generating a Diffie-Hellman key exchange file. Use the build-dh script to do this: ./build-dh This may take a few minutes to complete. Once your server is finished generating the key exchange file, copy the server keys and certificates from thekeys directory into the openvpn directory: cd /etc/openvpn/easy-rsa/keys sudo cp dh2048.pem ca.crt server.crt server.key /etc/openvpn Each client will also need a certificate in order for the OpenVPN server to authenticate it. These keys and certificates will be created on the server and then you will have to copy them over to your clients, which we will do in a later step. It’s advised that you generate separate keys and certificates for each client you intend to connect to your VPN. 
Because we'll only set up one client here, we called it client, but you can change this to a more descriptive name if you’d like: cd /etc/openvpn/easy-rsa ./build-key client Finally, copy the versioned OpenSSL configuration file, openssl-1.0.0.cnf, to a versionless name, openssl.cnf. Failing to do so could result in an error where OpenSSL is unable to load the configuration because it cannot detect its version: cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf Now that all the necessary keys and certificates have been generated for your server and client, you can move on to setting up routing between the two machines. Step 4 — Routing So far, you’ve installed OpenVPN on your server, configured it, and generated the keys and certificates needed for your client to access the VPN. However, you have not yet provided OpenVPN with any instructions on where to send incoming web traffic from clients. You can stipulate how the server should handle client traffic by establishing some firewall rules and routing configurations. Assuming you followed the prerequisites at the start of this tutorial, you should already have firewalld installed and running on your server. To allow OpenVPN through the firewall, you’ll need to know what your active firewalld zone is. Find this with the following command: sudo firewall-cmd --get-active-zones Output trusted Interfaces: tun0 Next, add the openvpn service to the list of services allowed by firewalld within your active zone, and then make that setting permanent by running the command again but with the --permanent option added: sudo firewall-cmd --zone=trusted --add-service openvpn sudo firewall-cmd --zone=trusted --add-service openvpn --permanent You can check that the service was added correctly with the following command: sudo firewall-cmd --list-services --zone=trusted Output openvpn Next, add a masquerade to the current runtime instance, and then add it again with the --permanentoption to add the masquerade to all future instances: sudo firewall-cmd --add-masquerade sudo firewall-cmd --permanent --add-masquerade You can check that the masquerade was added correctly with this command: sudo firewall-cmd --query-masquerade Output yes Next, forward routing to your OpenVPN subnet. You can do this by first creating a variable (SHARK in our example) which will represent the primary network interface used by your server, and then using that variable to permanently add the routing rule: SHARK=$(ip route get 8.8.8.8 | awk 'NR==1 {print $(NF-2)}') Be sure to implement these changes to your firewall rules by reloading firewalld: sudo firewall-cmd --reload Next, enable IP forwarding. This will route all web traffic from your client to your server’s IP address, and your client’s public IP address will effectively be hidden. Open sysctl.conf for editing: sudo nano /etc/sysctl.conf Then add the following line at the top of the file: /etc/sysctl.conf net.ipv4.ip_forward = 1 Finally, restart the network service so the IP forwarding will take effect: sudo systemctl restart network.service With the routing and firewall rules in place, we can start the OpenVPN service on the server. Step 5 — Starting OpenVPN OpenVPN is managed as a systemd service using systemctl. We will configure OpenVPN to start up at boot so you can connect to your VPN at any time as long as your server is running. 
To do this, enable the OpenVPN server by adding it to systemctl: sudo systemctl -f enable [email protected] Then start the OpenVPN service: sudo systemctl start [email protected] Double check that the OpenVPN service is active with the following command. You should see active (running) in the output: sudo systemctl status [email protected] Output: We’ve now completed the server-side configuration for OpenVPN. Next, you will configure your client machine and connect to the OpenVPN server. Step 6 — Configuring a Client Regardless of your client machine's operating system, it will need a locally-saved copy of the CA certificate and the client key and certificate generated in Step 3, as well as the static encryption key you generated at the end of Step 2. Locate the following files on your server. If you generated multiple client keys with unique, descriptive names, then the key and certificate names will be different. In this article we used client. /etc/openvpn/easy-rsa/keys/ca.crt /etc/openvpn/easy-rsa/keys/client.crt /etc/openvpn/easy-rsa/keys/client.key /etc/openvpn/myvpn.tlsauth Copy these files to your client machine. You can use SFTP or your preferred method. You could even just open the files in your text editor and copy and paste the contents into new files on your client machine. Regardless of which method you use, be sure to note where you save these files. Next, create a file called client.ovpn on your client machine. This is a configuration file for an OpenVPN client, telling it how to connect to the server: sudo nano client.ovpn Then add the following lines to client.ovpn. Notice that many of these lines reflect those which we uncommented or added to the server.conf file, or were already in it by default: client.ovpn client tls-client ca /path/to/ca.crt cert /path/to/client.crt key /path/to/client.key tls-crypt /path/to/myvpn.tlsauth remote-cert-eku "TLS Web Client Authentication" proto udp remote your_server_ip 1194 udp dev tun topology subnet pull user nobody group nobody When adding these lines, please note the following: You'll need to change the first line to reflect the name you gave the client in your key and certificate; in our case, this is just client You also need to update the IP address from your_server_ip to the IP address of your server; port 1194 can stay the same Make sure the paths to your key and certificate files are correct This file can now be used by any OpenVPN client to connect to your server. Below are OS-specific instructions for how to connect your client: Windows: On Windows, you will need the official OpenVPN Community Edition binaries which come with a GUI. Place your .ovpn configuration file into the proper directory, C:\Program Files\OpenVPN\config, and click Connect in the GUI. OpenVPN GUI on Windows must be executed with administrative privileges. macOS: On macOS, the open source application Tunnelblick provides an interface similar to the OpenVPN GUI on Windows, and comes with OpenVPN and the required TUN/TAP drivers. As with Windows, the only step required is to place your .ovpn configuration file into the ~/Library/Application Support/Tunnelblick/Configurations directory. Alternatively, you can double-click on your .ovpn file. Linux: On Linux, you should install OpenVPN from your distribution's official repositories. 
You can then invoke OpenVPN by executing: sudo openvpn --config ~/path/to/client.ovpn After you establish a successful client connection, you can verify that your traffic is being routed through the VPN by checking Google to reveal your public IP. Conclusion You should now have a fully operational virtual private network running on your OpenVPN server. You can browse the web and download content without worrying about malicious actors tracking your activity. There are several steps you could take to customize your OpenVPN installation even further, such as configuring your client to connect to the VPN automatically or configuring client-specific rules and access policies. For these and other OpenVPN customizations, you should consult the official OpenVPN documentation. If you’re interested in other ways you can protect yourself and your machines on the internet, check out our article on 7 Security Measures to Protect Your Servers.
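One last tip on the verification step above: if you prefer the command line to a browser, you can compare your public IP before and after connecting by querying a public IP echo service with curl. This is only a sketch; ifconfig.me is used here purely as an example of such a service, and it assumes curl is installed on the client:

# Before connecting, note your real public IP
curl https://ifconfig.me

# With the tunnel up, the same command should return your VPN server's public IP instead
curl https://ifconfig.me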
  17. Step 1 - Add Plex Repository

The first step for this guide is to add the Plex repository to our CentOS 7 system. Go to the 'yum.repos.d' directory and create a new repo file 'plex.repo' using the vim editor.

cd /etc/yum.repos.d/
vim plex.repo

Paste the following Plex repository configuration there.

# Plex.repo file will allow dynamic install/update of plexmediaserver.
[PlexRepo]
name=PlexRepo
baseurl=https://downloads.plex.tv/repo/rpm/$basearch/
enabled=1
gpgkey=https://downloads.plex.tv/plex-keys/PlexSign.key
gpgcheck=1

Save and exit. The Plex repository has been added to the CentOS 7 system.

Step 2 - Install Plex Media Server on CentOS 7/8

Now we will install the Plex media server on our CentOS server. Run the yum command below.

sudo yum -y install plexmediaserver

After the installation is complete, start the plex service and enable it to launch every time at system boot using the systemctl commands below.

systemctl start plexmediaserver
systemctl enable plexmediaserver

Plex media server has been installed - check it using the following command.

systemctl status plexmediaserver

And you will get the result as shown below. The Plex Media Server is now running on the CentOS 7 server.

Removing Plex Media Server on CentOS 7/8 (optional)

To completely remove the Plex Media Server from the computer, first make sure the Plex Media Server is not running. Then do the following:

Run the command rpm -e plexmediaserver
Remove the directory /var/lib/plexmediaserver/
Run the command userdel plex

Step 3 - Configure Firewalld Rules for Plex Media Server

In this tutorial, we will enable the Firewalld service. Make sure the firewalld packages are installed on the system, or install them using the yum command below.

sudo yum -y install firewalld

Now start the firewalld service and enable it to launch every time at system boot.

systemctl start firewalld
systemctl enable firewalld

Next, we need to add a new firewalld configuration for our Plex installation. The Plex media server needs several ports in the 'LISTEN' state, so we will create a new firewalld service definition. Go to the '/etc/firewalld/services' directory and create a new service configuration file 'plexmediaserver.xml' using vim.

cd /etc/firewalld/services/
vim plexmediaserver.xml

There, paste the following configuration.

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>plexmediaserver</short>
  <description>Ports required by plexmediaserver.</description>
  <port protocol="tcp" port="32400"></port>
  <port protocol="udp" port="1900"></port>
  <port protocol="tcp" port="3005"></port>
  <port protocol="udp" port="5353"></port>
  <port protocol="tcp" port="8324"></port>
  <port protocol="udp" port="32410"></port>
  <port protocol="udp" port="32412"></port>
  <port protocol="udp" port="32413"></port>
  <port protocol="udp" port="32414"></port>
  <port protocol="tcp" port="32469"></port>
</service>

Save and exit. Now add the 'plexmediaserver' service to the firewalld services list, then reload the configuration.

sudo firewall-cmd --add-service=plexmediaserver --permanent
sudo firewall-cmd --reload

And you will get the result as below. The plexmediaserver service has been added to firewalld - check it using the firewalld command below.

firewall-cmd --list-all

And you should see 'plexmediaserver' in the services list.

Step 4 - Configure Plex Media Server

Before configuring the Plex media server, make sure you have a Plex account. If not, you can register using the URL below.

https://app.plex.tv/

And then log in to your account.
If you're a registered user and logged in with your browser, you can open your Plex Media Server installation by visiting the following URL, changing the IP to your server's IP:

http://192.168.33.10:32400/web/

You will be redirected to the Plex login page. Click the 'SIGN IN' button.
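If the setup wizard does not appear when you connect from another machine (the first-time claiming of a Plex server sometimes has to come from the server itself or its local network), a common workaround is an SSH tunnel; this is only a sketch, and "youruser" is a placeholder for your own account on the server:

ssh -L 32400:localhost:32400 youruser@192.168.33.10

Then browse to http://localhost:32400/web/ on your workstation; the connection will appear to Plex as local, and you can complete the sign-in as described above.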
  18. Install Samba4 in CentOS 7 1. First install Samba4 and required packages from the default CentOS repositories using the yum package manager tool as shown. # yum install samba samba-client samba-common Install Samba4 on CentOS 7 2. After installing the samba packages, enable samba services to be allowed through system firewall with these commands. # firewall-cmd --permanent --zone=public --add-service=samba # firewall-cmd --reload Open Samba on Firewalld Check Windows Machine Workgroup Settings 3. Before you proceed to configure samba, make sure the Windows machine is in the same workgroup to be configured on the CentOS server. There are two possible ways to view the Windows machine workgroup settings: Right clicking on “This PC” or “My Computer” → Properties → Advanced system settings → Computer Name. Check Windows WorkGroup Alternatively, open the cmd prompt and run the following command, then look for “workstation domain” in the output as shown below. >net config workstation Verify Windows WorkGroup Configuring Samba4 on CentOS 7 4. The main samba configuration file is /etc/samba/smb.conf, the original file comes with pre-configuration settings which explain various configuration directives to guide you. But, before configuring samba, I suggest you to take a backup of the default file like this. # cp /etc/samba/smb.conf /etc/samba/smb.conf.orig Then, proceed to configure samba for anonymous and secure file sharing services as explained below. Samba4 Anonymous File Sharing 5. First create the shared directory where the files will be stored on the server and set the appropriate permissions on the directory. # mkdir -p /srv/samba/anonymous # chmod -R 0775 /srv/samba/anonymous # chown -R nobody:nobody /srv/samba/anonymous Also, you need to change the SELinux security context for the samba shared directory as follows. # chcon -t samba_share_t /srv/samba/anonymous Create Samba Shared Directory 6. Next, open the samba configuration file for editing, where you can modify/add the sections below with the corresponding directives. # vi /etc/samba/smb.conf Samba Configuration Settings [global] workgroup = WORKGROUP netbios name = centos security = user [Anonymous] comment = Anonymous File Server Share path = /srv/samba/anonymous browsable =yes writable = yes guest ok = yes read only = no force user = nobody 7. Now verify current samba settings by running the command below. # testparm Verify Samba Current Configuration Settings Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[homes]" Processing section "[printers]" Processing section "[print$]" Processing section "[Anonymous]" Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions # Global parameters [global] netbios name = centos printcap name = cups security = USER idmap config * : backend = tdb cups options = raw [homes] comment = Home Directories browseable = No inherit acls = Yes read only = No valid users = %S %D%w%S [printers] comment = All Printers path = /var/tmp browseable = No printable = Yes create mask = 0600 [print$] comment = Printer Drivers path = /var/lib/samba/drivers create mask = 0664 directory mask = 0775 write list = root [Anonymous] comment = Anonymous File Server Share path = /srv/samba/anonymous force user = nobody guest ok = Yes read only = No 8. Finally, start and enable samba services to start automatically at next boot and also apply the above changes to take effect. 
# systemctl enable smb.service # systemctl enable nmb.service # systemctl start smb.service # systemctl start nmb.service Testing Anonymous Samba File Sharing 9. Now on the Windows machine, open “Network” from a Windows Explorer window, then click on the CentOShost, or else try to access the server using its IP address (use ifconfig command to get IP address). e.g. \\192.168.43.168. Shared Network Hosts 10. Next, open the Anonymous directory and try to add files in there to share with other users. Samba Anonymous Share Add Files to Samba Anonymous Share Setup Samba4 Secure File Sharing 11. First start by creating a samba system group, then add users to the group and set a password for each user like so. # groupadd smbgrp # usermod tecmint -aG smbgrp # smbpasswd -a tecmint 12. Then create a secure directory where the shared files will be kept and set the appropriate permissions on the directory with SELinux security context for the samba. # mkdir -p /srv/samba/secure # chmod -R 0770 /srv/samba/secure # chown -R root:smbgrp /srv/samba/secure # chcon -t samba_share_t /srv/samba/secure 13. Next open the configuration file for editing and modify/add the section below with the corresponding directives. # vi /etc/samba/smb.conf Samba Secure Configuration Settings [Secure] comment = Secure File Server Share path = /srv/samba/secure valid users = @smbgrp guest ok = no writable = yes browsable = yes 14. Again, verify the samba configuration settings by running the following command. $ testparm Verify Secure Configuration Settings Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[homes]" Processing section "[printers]" Processing section "[print$]" Processing section "[Anonymous]" Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions # Global parameters [global] netbios name = centos printcap name = cups security = USER idmap config * : backend = tdb cups options = raw [homes] comment = Home Directories browseable = No inherit acls = Yes read only = No valid users = %S %D%w%S [printers] comment = All Printers path = /var/tmp browseable = No printable = Yes create mask = 0600 [print$] comment = Printer Drivers path = /var/lib/samba/drivers create mask = 0664 directory mask = 0775 write list = root [Anonymous] comment = Anonymous File Server Share path = /srv/samba/anonymous force user = nobody guest ok = Yes read only = No [Secure] comment = Secure File Server Share path = /srv/samba/secure read only = No valid users = @smbgrp 15. Restart Samba services to apply the changes. # systemctl restart smb.service # systemctl restart nmb.service Testing Secure Samba File Sharing 16. Go to Windows machine, open “Network” from a Windows Explorer window, then click on the CentOS host, or else try to access the server using its IP address. e.g. \\192.168.43.168. You’ll be asked to provide your username and password to login the CentOS server. Once you have entered the credentials, click OK. Samba Secure Login 17. Once you successfully login, you will see all the samba shared directories. Now securely share some files with other permitted users on the network by dropping them in Secure directory.
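You can also verify both shares from the Linux side before involving a Windows client, using the smbclient utility that came with the samba-client package installed in step 1. The IP address, share name, and user below match the examples used above; substitute your own:

# List the shares exported by the server
smbclient -L //192.168.43.168 -U tecmint

# Connect to the secure share and list its contents (you will be prompted for the smbpasswd password)
smbclient //192.168.43.168/Secure -U tecmint -c 'ls'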
  19. Navigate to the following location:

/etc/sysconfig/network-scripts/

In this directory you will find your NIC configuration file. Open it with your editor of choice and change:

BOOTPROTO=dhcp

to:

BOOTPROTO=static

Now you'll need to add the entries to set not only the IP address, but also the netmask, gateway, and DNS addresses. At the bottom of that file, add the following:

IPADDR=192.168.1.200
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=1.0.0.1
DNS2=1.1.1.1
DNS3=8.8.4.4

Save the file, then restart networking:

sudo systemctl restart network
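For reference, here is a minimal sketch of what the finished file might look like, assuming the interface is named eth0 (your file will be named something like ifcfg-eth0 or ifcfg-ens33 and will contain extra lines such as UUID and HWADDR that you should leave in place):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (example only)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.200
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=1.0.0.1
DNS2=1.1.1.1
DNS3=8.8.4.4

After the restart, confirm the new address with ip addr show eth0.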
  20. Run the following: curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
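If you'd rather not pipe a script straight from GitHub into Python, the same tool is packaged on PyPI, so an alternative (assuming pip is available on the server) is:

pip install speedtest-cli
speedtest-cli --simple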
  21. Plex is a free, feature-rich media library platform that provides a way to store all your movies, shows, and other media in one place. You can access Plex from any device, whether you're at home or on the go. There are many different media tools available, such as Kodi, XBMC, OSMC and MediaTomb, but Plex Media Server is perhaps one of the most popular solutions for managing media. Plex runs on Windows, macOS, Linux, FreeBSD and many more. Plex is a client-server media player system made up of two main components: 1) the Plex Media Server, which organizes music, photo and video content from personal media libraries and streams it to the players, and 2) the players, which can be the Plex web UI, Plex apps or Plex Home Theater. Plex Media Server supports Chromecast, Amazon FireTV, Android, iOS, Xbox, PlayStation, Apple TV, Roku, Android TV and various types of smart TVs. If you are looking for a way to watch your movies from anywhere, then Plex is the best choice for you.

In this tutorial, we will learn how to install and configure Plex Media Server on Ubuntu 16.04.

Requirements

A server running Ubuntu 16.04.
A non-root user with sudo privileges set up on your server.
A static IP address set up on your server.

Getting Started

Before starting, make sure your system is fully up to date by running the following commands:

sudo apt-get update -y
sudo apt-get upgrade -y

Once your system is updated, restart it to apply all these changes with the following command:

sudo reboot

After restarting, log in with your sudo user and proceed to the next step.

1. Install Plex Media Server

First, you will need to download the latest version of Plex from the official website. You can download it by running the following command:

wget https://downloads.plex.tv/plex-media-server/1.7.5.4035-313f93718/plexmediaserver_1.7.5.4035-313f93718_amd64.deb

Once Plex is downloaded, run the following command to install it:

sudo dpkg -i plexmediaserver_1.7.5.4035-313f93718_amd64.deb

Next, start Plex Media Server and enable it to start at boot time by running the following commands:

sudo systemctl start plexmediaserver
sudo systemctl enable plexmediaserver

You can check the status of Plex Media Server at any time by running the following command:

sudo systemctl status plexmediaserver

You should see the following output:

● plexmediaserver.service - Plex Media Server for Linux
   Loaded: loaded (/lib/systemd/system/plexmediaserver.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2017-08-05 11:48:52 IST; 17s ago
 Main PID: 3243 (sh)
   CGroup: /system.slice/plexmediaserver.service
           ├─3243 /bin/sh -c LD_LIBRARY_PATH=/usr/lib/plexmediaserver "/usr/lib/plexmediaserver/Plex Media Server"
           ├─3244 /usr/lib/plexmediaserver/Plex Media Server
           └─3288 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resources/Plug-ins-313f93718/Framework.bundle/Contents/Resources/Versions/

Aug 05 11:49:04 Node1 systemd[1]: Started Plex Media Server for Linux.
Aug 05 11:49:04 Node1 sh[3243]: Error in command line:the argument for option '--serverUuid' should follow immediately after the equal sign Aug 05 11:49:04 Node1 sh[3243]: Crash Uploader options (all are required): Aug 05 11:49:04 Node1 sh[3243]: --directory arg Directory to scan for crash reports Aug 05 11:49:04 Node1 sh[3243]: --serverUuid arg UUID of the server that crashed Aug 05 11:49:04 Node1 sh[3243]: --userId arg User that owns this product Aug 05 11:49:04 Node1 sh[3243]: --platform arg Platform string Aug 05 11:49:04 Node1 sh[3243]: --url arg URL to upload to Aug 05 11:49:04 Node1 sh[3243]: --help show help message Aug 05 11:49:04 Node1 sh[3243]: --version arg Version of the product Next, you will need to create a directory to store your Plex media. You can create this by running the following command: sudo mkdir -p /root/plex/movie Or if you already have shares on your server, skip this step Once you are finished, you can proceed to the next step. 2. Configure Plex Now, all the components are installed on your system, it's time to configure and access Plex. Open your web browser and type the URL http://your-ip:32400/web, login and follow the setup wizard. Congratulations! your Plex Media Server is ready, you are now ready to connect to it from your Plex client application or Web browser.
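If you are running UFW on the Ubuntu server (it ships disabled by default, so treat this as an optional extra), you will also need to open Plex's web port before remote clients can reach the URL above:

sudo ufw allow 32400/tcp
sudo ufw status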
  22. To disable IPv6, open /etc/sysctl.conf using any text editor and insert the following lines at the end:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

If IPv6 is still not disabled, the problem is that the sysctl.conf changes have not been applied yet. To apply them, open a terminal (Ctrl+Alt+T) and run:

sudo sysctl -p

You will see this in the terminal:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

After that, if you run:

cat /proc/sys/net/ipv6/conf/all/disable_ipv6

It will report:

1

If you see 1, IPv6 has been successfully disabled.
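A quick way to double-check, and to undo the change later if you need IPv6 back: with IPv6 disabled the interfaces lose their IPv6 addresses, and reverting is just a matter of setting the same keys back to 0 and re-applying them. For example:

# Verify: should print no global IPv6 addresses once IPv6 is off
ip -6 addr show

# To re-enable IPv6 later, change the three lines in /etc/sysctl.conf to 0, then re-apply:
sudo sysctl -p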
  23. Command Line Partitioning

You'll be using "fdisk" to accomplish this. Refer back to the logical name you noted from earlier. For illustration, I'll use /dev/sdb, and assume that you want a single partition on the disk, occupying all the free space.

If the number of cylinders in the disk is larger than 1024 (and large hard drives always have more), it could, in certain setups, cause problems with:

software that runs at boot time (e.g., old versions of LILO)
booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)

Otherwise, this will not negatively affect you.

1) Initiate fdisk with the following command:

sudo fdisk /dev/sdb

2) Fdisk will display the following menu:

Command (m for help): m <enter>
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
Command (m for help):

3) We want to add a new partition. Type "n" and press enter.

Command action
   e   extended
   p   primary partition (1-4)

4) We want a primary partition. Enter "p" and enter.

Partition number (1-4):

5) Since this will be the only partition on the drive, number 1. Enter "1" and enter.

Command (m for help):

If it asks about the first cylinder, just type "1" and enter. (We are making 1 partition to use the whole disk, so it should start at the beginning.)

6) Now that the partition is entered, choose option "w" to write the partition table to the disk. Type "w" and enter.

The partition table has been altered!

7) If all went well, you now have a properly partitioned hard drive that's ready to be formatted. Since this is the first partition, Linux will recognize it as /dev/sdb1, while the disk that the partition is on is still /dev/sdb.

Command Line Formatting

To format the new partition as an ext3 file system (best for use under Ubuntu):

sudo mkfs -t ext3 /dev/sdb1

To format the new partition as a FAT32 file system (best for use under both Ubuntu and Windows), use the vfat type; there is no "mkfs.fat32" helper, so "mkfs -t fat32" would fail:

sudo mkfs -t vfat -F 32 /dev/sdb1

As always, substitute "/dev/sdb1" with your own partition's path.

Modify Reserved Space (Optional)

When formatting the drive as ext2/ext3, 5% of the drive's total space is reserved for the super-user (root) so that the operating system can still write to the disk even if it is full. However, for disks that only contain data, this is not necessary.

NOTE: You may run this command on a FAT32 file system, but it will do nothing; therefore, I highly recommend not running it.

You can adjust the percentage of reserved space with the "tune2fs" command, like this:

sudo tune2fs -m 1 /dev/sdb1

This example reserves 1% of space - change this number if you wish. Using this command does not change any existing data on the drive. You can use it on a drive which already contains data.

Create A Mount Point

Now that the drive is partitioned and formatted, you need to choose a mount point. This will be the location from which you will access the drive in the future. I would recommend using a mount point under "/media", as it is the default used by Ubuntu. For this example, we'll use the path "/media/mynewdrive":

sudo mkdir /media/mynewdrive

Now we are ready to mount the drive to the mount point.
Mount The Drive

You can choose to have the drive mounted automatically each time you boot the computer, or manually only when you need to use it.

Automatic Mount At Boot

Note: Ubuntu now recommends using UUIDs instead of device names, see the instructions here: https://help.ubuntu....unity/UsingUUID

You'll need to edit /etc/fstab:

gksu gedit /etc/fstab

or in terminal:

sudo nano -Bw /etc/fstab

Note: https://help.ubuntu....b#Editing_fstab

Add this line to the end (for ext3 file system):

/dev/sdb1 /media/mynewdrive ext3 defaults 0 2

Add this line to the end (for fat32 file system):

/dev/sdb1 /media/mynewdrive vfat defaults 0 2

The defaults part may allow you to read, but not write. To enable writing, other partition- and FAT-specific options must be used. If GNOME Nautilus is being used, mount the drive with the right-click, Mount method from the Computer folder. Then launch the mount command from a terminal, with no options. The last entry should be the FAT drive and look something like:

/dev/sda5 on /media/mynewdrive type vfat (rw,nosuid,nodev,uhelper=hal,shortname=mixed,uid=1000,utf8,umask=077,flush)

All of the parts between the parentheses are the mount options and should replace "defaults" in the fstab file. The "2" at the end instructs your system to run a quick file system check on the hard drive at every boot. Changing it to "0" will skip this. Run 'man fstab' for more info.

You can now run "sudo mount -a" (or reboot the computer) to have the changes take effect.

If you want to allow a normal user to create files on this drive, you can either give this user ownership of the top directory of the drive filesystem (replace USERNAME with the username):

sudo chown -R USERNAME:USERNAME /media/mynewdrive

or, in a more flexible way that is practical if you have several users, allow for instance the users in the plugdev group (usually those who are meant to be able to mount removable disks, i.e. desktop users) to create files and sub-directories on the disk:

sudo chgrp plugdev /media/mynewdrive
sudo chmod g+w /media/mynewdrive
sudo chmod +t /media/mynewdrive

The last "chmod +t" adds the sticky bit, so that people can only delete their own files and sub-directories in a directory, even if they have write permissions to it (see man chmod).

Manually Mount

Alternatively, you may want to manually mount the drive every time you need it. For manual mounting, use the following command:

sudo mount /dev/sdb1 /media/mynewdrive

When you are finished with the drive, you can unmount it using:

sudo umount /media/mynewdrive
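Since the Automatic Mount At Boot note above recommends UUIDs over device names (names like /dev/sdb1 can change between boots when disks are added or removed), here is a short sketch of that approach; the UUID shown is a placeholder you must replace with the value blkid prints for your partition:

# Find the UUID of the new partition
sudo blkid /dev/sdb1

Then the ext3 fstab line from above becomes, with your own UUID substituted:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/mynewdrive ext3 defaults 0 2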
  24. Here is an example of how to use rsync to copy data from a local server to a remote server over SSH.

rsync -av --progress /home/user/directory/ [email protected]:/home/user/directory/
rsync -av --progress /home/shares/data/ [email protected]:/home/shares/data
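Because rsync will faithfully copy whatever you point it at (and, with some options, delete files on the destination), it is worth doing a dry run first. Adding -n (--dry-run) makes rsync print what it would transfer without changing anything; the remote user and host below are placeholders for your own:

rsync -avn --progress /home/user/directory/ user@remote_host:/home/user/directory/

Drop the n once the output looks right.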
  25. Here is the command to clear the cached memory (page cache, dentries, and inodes) on your server. Only tee needs root, since it is what writes to the protected file:

echo 3 | sudo tee /proc/sys/vm/drop_caches
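If you are doing this before a disk benchmark or to see how much memory is really in use, it is a good idea to flush dirty pages to disk first so nothing unwritten is lost when the caches are dropped; a commonly used combination is:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

The kernel simply rebuilds these caches as files are accessed again, so this is safe, but it only temporarily lowers the "cached" figure shown by free -h.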