
All Activity

  1. You can remove a PPA from the sources list where these PPAs are stored. PPA repositories are stored in files of the form PPA_Name.list. Use the following command to see all the PPAs added to your system: ls /etc/apt/sources.list.d
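     To actually remove one, a minimal sketch (the file name some-ppa.list is a placeholder for whatever the ls above shows):
     sudo rm /etc/apt/sources.list.d/some-ppa.list
     sudo apt update
     Alternatively, if you know the PPA's name, add-apt-repository can remove its entry in one step:
     sudo add-apt-repository --remove ppa:owner/name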
  2. Installation instructions are located at https://linuxiac.com/nala-apt-command-frontend/
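     On recent Debian and Ubuntu releases nala is in the default repositories, so installation may be as simple as the sketch below (check the linked instructions for older releases, which need an extra repository):
     sudo apt update
     sudo apt install nala
     Once installed it works as a drop-in front end for apt, e.g. sudo nala install htop or sudo nala upgrade.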
  3. Run the following commands to identify what your DNS settings are.
     Ubuntu 22.04:
     resolvectl status | grep "DNS Server" -A2
     Ubuntu 20.04 or older:
     systemd-resolve --status | grep 'DNS Servers' -A2
  4. Open a superuser shell:
     sudo bash
     Go to the Nextcloud folder:
     cd /var/www/nextcloud
     Run the following command:
     sudo -u www-data php occ db:add-missing-indices
     The prompt should then show something like this:
     Check indices of the share table.
     Adding additional parent index to the share table, this can take some time...
     Share table updated successfully.
     Adding additional mtime index to the filecache table, this can take some time...
     Filecache table updated successfully.
     And that's all, the problem should be fixed.
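     Recent Nextcloud releases also ship sibling occ maintenance commands that the admin overview may suggest alongside the missing-indices one; assuming the same working directory, they are run the same way:
     sudo -u www-data php occ db:add-missing-columns
     sudo -u www-data php occ db:add-missing-primary-keys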
  5. To remove the “You do not have a valid subscription for this server” popup message while logging in, run the command below. You'll need to SSH to your Proxmox server or use the node console through the PVE web interface. If you have issues and need to revert the changes, see the instructions at the bottom of this post. When you update your Proxmox server and the update includes the proxmox-widget-toolkit package, you'll need to repeat this modification (an automation sketch follows at the end of these notes). It works with versions 5.1 and newer, tested up to the version shown in the title.
     Run the following one-line command and then clear your browser cache (depending on the browser you may need to open a new tab or restart the browser):
     sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
     Manual Steps
     Here are alternative step-by-step instructions so you can understand what the above command is doing:
     1. Change to the working directory:
     cd /usr/share/javascript/proxmox-widget-toolkit
     2. Make a backup:
     cp proxmoxlib.js proxmoxlib.js.bak
     3. Edit the file:
     nano proxmoxlib.js
     4. Locate the following code (use Ctrl+W in nano and search for “No valid subscription”):
     Ext.Msg.show({
       title: gettext('No valid subscription'),
     5. Replace “Ext.Msg.show” with “void”:
     void({ //Ext.Msg.show({
       title: gettext('No valid subscription'),
     6. Restart the Proxmox web service (and be sure to clear your browser cache; depending on the browser you may need to open a new tab or restart the browser):
     systemctl restart pveproxy.service
     Additional Notes
     You can quickly check whether the change has been made:
     grep -n -B 1 'No valid sub' proxmoxlib.js
     You have three options to revert the changes:
     1. Manually edit proxmoxlib.js to undo the changes you made
     2. Restore the backup file you created from the proxmox-widget-toolkit directory:
     mv proxmoxlib.js.bak proxmoxlib.js
     3. Reinstall the proxmox-widget-toolkit package from the repository:
     apt-get install --reinstall proxmox-widget-toolkit
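     Since a proxmox-widget-toolkit upgrade reverts the patch, one way to reapply it automatically is an APT hook. This is a sketch, not part of the original instructions: the helper script and the 99-no-subscription-nag file name are arbitrary, and the hook will restart pveproxy after every apt run, so verify the sed pattern still matches after major upgrades:
     cat > /usr/local/bin/remove-subscription-nag <<'EOF'
     #!/bin/sh
     # Hypothetical helper: reapply the popup patch after package operations
     sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
     systemctl restart pveproxy.service
     EOF
     chmod +x /usr/local/bin/remove-subscription-nag
     echo 'DPkg::Post-Invoke { "/usr/local/bin/remove-subscription-nag"; };' > /etc/apt/apt.conf.d/99-no-subscription-nag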
  6. If you remove a server and it errors saying storage "images" (or whatever you named your storage) doesn't exist, do the following. Navigate to your storage and ensure the image is gone. Check your storage: my storage location is /mnt/wd-external/images/images. Next navigate to /etc/pve/qemu-server and remove that server's <VMID>.conf file.
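     A minimal sketch, assuming the stale guest had VM ID 100 (substitute the ID from the error message):
     cd /etc/pve/qemu-server
     rm 100.conf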
  7. If you are running a web server, database, or Plex server you might find that your memory is at 90%+ all the time. Here is how to clear it up (a leaner standalone version of the script is sketched after these steps).
     1. Create a file in /usr/local/bin. I am calling it free-memory but you can call the file whatever you want:
     sudo nano /usr/local/bin/free-memory
     2. Add the command, then save and close:
     free -h && sudo sysctl -w vm.drop_caches=3 && sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free -h
     3. Change the permissions of the script:
     sudo chmod 555 /usr/local/bin/free-memory
     4. Create a cron job:
     sudo crontab -e
     5. I want to run mine every minute; change this to your needs:
     * * * * * /usr/local/bin/free-memory
     6. Check your server after a minute. You should see your memory usage change.
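     Since the root crontab above already runs the script as root, the sudo calls are unnecessary there, and sysctl vm.drop_caches=3 and echo 3 > /proc/sys/vm/drop_caches do the same thing. A standalone version might look like this (must run as root):
     #!/bin/bash
     # Show usage, flush page cache plus dentries and inodes, show usage again
     free -h
     sync
     echo 3 > /proc/sys/vm/drop_caches
     free -h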
  8. Accessing a Gmail Account from Nextcloud
     Due to Google's security policies, accessing your Gmail account from Nextcloud requires additional steps.
     If you use two-factor authentication, you'll need to generate an app password:
     1. Visit https://myaccount.google.com/apppasswords from a web browser
     2. At the bottom of the page, click the drop-down box labeled "Select app"
     3. Choose the option "Other (Custom name)"
     4. Enter a descriptive name, such as "Nextcloud Mail"
     5. Click "Generate"
     6. Go back to Nextcloud (Mail/Rainloop), and enter your e-mail address and the app password you just generated
     Your Gmail account should now be accessible from within Nextcloud.
     If you are not using two-factor authentication, you'll need to allow "Less Secure Apps":
     1. Visit https://myaccount.google.com/lesssecureapps
     2. Toggle the radio button "Allow less secure apps" to the "ON" position
     3. Go back to Nextcloud (Mail/Rainloop), and enter your e-mail address and Google password
     Your Gmail account should now be accessible from within Nextcloud.
  9. The Issue: we want to prevent the snapd service from starting automatically on system boot on Ubuntu, or remove Snap completely.
     1 Disable snap services
     1.1 Bring up the terminal or log in via SSH
     1.2 Execute the following commands to disable the snap services:
     sudo systemctl disable snapd.service
     sudo systemctl disable snapd.socket
     sudo systemctl disable snapd.seeded
     sudo systemctl disable snapd.snap-repair.timer
     1.3 Restart the system:
     sudo reboot
     1.4 Now the snap service will not start on system startup
     2 Removing Snap
     To uninstall snap (if necessary), we need to make sure snap is not used at all. If we want to uninstall/remove snap, follow the steps below (a loop for step 2.2 is sketched after this list).
     2.1 List all snaps:
     snap list
     2.2 If there is any installed snap package, e.g. bashtop, remove them one by one:
     sudo snap remove bashtop
     2.3 Find the snap core service mounts:
     df
     From the output, under the “Mounted on” column, find the ones with “/snap/core/xxxx”
     2.4 Unmount the snap core service:
     sudo umount /snap/core/xxxx
     2.5 Remove the snapd package:
     sudo apt purge snapd
     2.6 Remove all directories (if necessary); be careful with the rm command, we will lose data if done incorrectly:
     rm -rf ~/snap
     sudo rm -rf /snap
     sudo rm -rf /var/snap
     sudo rm -rf /var/lib/snapd
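     If several snaps are installed, a small loop saves typing them out one by one; a sketch assuming the usual snap list output (first column is the package name, first row a header):
     for s in $(snap list | awk 'NR>1 {print $1}'); do sudo snap remove "$s"; done
     Note that the core/snapd snaps may refuse removal while other snaps still depend on them, so the loop may need a second pass.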
  10. 1: Set your user as the owner
     chown -R joe /var/www/your-website.com/
     This command sets joe as the owner of every file and folder inside the directory (-R stands for recursive).
     2: Set the web server as the group owner
     chgrp -R www-data /var/www/your-website.com/
     This command sets www-data as the group owner of every file and folder inside the directory. Recursive mode, as above.
     3: 750 permissions for everything
     chmod -R 750 /var/www/your-website.com/
     The third command sets the permissions: read, write and execute (7) for the owner (i.e., you), read and execute (5) for the group owner (i.e., the web server), and no permissions at all (0) for others. Once again this is done on every file and folder in the directory, recursively.
     4: New files and folders inherit group ownership from the parent folder
     chmod g+s /var/www/your-website.com/
     The last command makes all files/folders created within the directory automatically take on the group ownership of the parent folder, that is, your web server. The s flag is a special mode bit representing setuid/setgid. In simple words, new files and directories created by the web server will have the same group ownership as the your-website.com/ folder, which we set to www-data with the second command.
     When the web server needs to write
     If you have folders that need to be writable by the web server, you can just modify the permission values for the group owner so that www-data has write access. Run this command on each writable folder:
     chmod g+w /var/www/your-website.com/<writable-folder>
     For security reasons apply this only where necessary and not on the whole website directory.
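     One side effect of chmod -R 750 is that plain files end up executable. If you'd rather have 640 on files and keep 750 only on directories, a find-based variant (a sketch, same placeholder path as above) does that:
     find /var/www/your-website.com/ -type d -exec chmod 750 {} \;
     find /var/www/your-website.com/ -type f -exec chmod 640 {} \;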
  11. APCu
     APCu is a data cache, and it is available in most Linux distributions. On Red Hat/CentOS/Fedora systems install php-pecl-apcu. On Debian/Ubuntu/Mint systems install php-apcu. After restarting your web server, add this line to your config.php file:
     'memcache.local' => '\OC\Memcache\APCu',
     Refresh your Nextcloud admin page, and the cache warning should disappear.
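     If the warning persists, it's worth confirming PHP actually loaded the extension; this quick check should print apcu when the module is active:
     php -m | grep -i apcu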
  12. To enable previews for files in Nextcloud, you need to install “Preview Generator” from the Nextcloud app store: https://apps.nextcloud.com/apps/previewgenerator
     To install it, log in to Nextcloud as admin. From the right drop-down menu, click the + Apps link. Once on the Apps page, use the search on the right side to find "Preview Generator" and install it.
     You also need to install some additional software; on Ubuntu/Debian install it with:
     sudo apt install libreoffice ffmpeg imagemagick ghostscript
     Now edit the config/config.php file of your Nextcloud installation and add the following code:
     'enable_previews' => true,
     'preview_libreoffice_path' => '/usr/bin/libreoffice',
     'enabledPreviewProviders' => array (
       0 => 'OC\\Preview\\TXT',
       1 => 'OC\\Preview\\MarkDown',
       2 => 'OC\\Preview\\OpenDocument',
       3 => 'OC\\Preview\\PDF',
       4 => 'OC\\Preview\\MSOffice2003',
       5 => 'OC\\Preview\\MSOfficeDoc',
       6 => 'OC\\Preview\\Image',
       7 => 'OC\\Preview\\Photoshop',
       8 => 'OC\\Preview\\TIFF',
       9 => 'OC\\Preview\\SVG',
       10 => 'OC\\Preview\\Font',
       11 => 'OC\\Preview\\MP3',
       12 => 'OC\\Preview\\Movie',
       13 => 'OC\\Preview\\MKV',
       14 => 'OC\\Preview\\MP4',
       15 => 'OC\\Preview\\AVI',
     ),
     For more info on configuration, check the Nextcloud documentation.
     Generate previews for existing files
     Let's generate thumbnails for existing files. For this, I enabled shell access for www-data so preview files have proper file ownership (not owned by root):
     chsh --shell /bin/bash www-data
     Now change to the www-data user:
     su - www-data
     Now run:
     /usr/bin/php /var/www/nextcloud/occ preview:generate-all -vvv
     Autogenerate previews for new files
     Set a cron job as user www-data:
     crontab -e -u www-data
     */5 * * * * /usr/bin/php /var/www/nextcloud/occ preview:pre-generate > /dev/null 2>&1
  13. Let's Encrypt Auto-Renew Testing:
     Though this part is optional, I recommend you test your auto-renew cron script for errors. It would be a disaster if your Let's Encrypt certificate did not renew before expiry due to some error.
     Basic testing using --dry-run:
     For error checking we'll perform certbot renew --dry-run (or path/location/certbot-auto renew --dry-run), a process in which the auto-renew script is executed without actually renewing the certificates. Execute the following lines in your Linux terminal:
     sudo -i
     certbot renew --dry-run && apache-restart-command
     Testing using --force-renewal:
     In this advanced testing section we'll simulate the Let's Encrypt certificate renewal process by using the --force-renewal flag. As you already know, the certbot renew command only takes action when your certificate has less than 30 days left. But if we run it with --force-renewal, the certificate gets renewed immediately:
     sudo certbot renew --force-renewal
     Remember that you can only renew 5 duplicate certificates per week for a particular domain or subdomain.
     Note the date of your current certificate:
     To view the current expiry date of your Let's Encrypt certificate, execute the following command in your terminal:
     sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem
     Check if the renewal was successful:
     Now check the certificate's expiry date again; after a forced renewal the notAfter date should have moved forward:
     sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem
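     Once the dry run passes, the renewal itself is usually left to cron or a systemd timer. A sketch of a crontab entry, assuming an Apache reload is the right post-renewal step on your system (--deploy-hook only fires when a certificate is actually renewed):
     0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload apache2"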
  14. Node Exporter is the best way to collect all the Linux server related metrics and statistics for monitoring.
     Monitor Linux Servers Using Prometheus
     In this guide, you will learn how to set up the Prometheus Node Exporter on a Linux server to export all node-level metrics to the Prometheus server.
     Before You Begin
     Node Exporter needs a Prometheus server to be up and running; if you would like to set up Prometheus, see the Prometheus installation walkthrough later in this stream. Port 9100 must be opened in the server firewall, as Prometheus reads metrics on this port.
     Setup Node Exporter Binary
     Step 1: Download the latest node exporter package. You should check the Prometheus downloads section for the latest version and update this command to get that package.
     wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
     Step 2: Unpack the tarball:
     tar -xvf node_exporter-1.3.1.linux-amd64.tar.gz
     Step 3: Move the node exporter binary to /usr/local/bin:
     sudo mv node_exporter-1.3.1.linux-amd64/node_exporter /usr/local/bin/
     Create a Custom Node Exporter Service
     Step 1: Create a node_exporter user to run the node exporter service:
     sudo useradd -rs /bin/false node_exporter
     Step 2: Create a node_exporter service file under systemd:
     sudo vi /etc/systemd/system/node_exporter.service
     Step 3: Add the following service file content to the service file and save it:
     [Unit]
     Description=Node Exporter
     After=network.target

     [Service]
     User=node_exporter
     Group=node_exporter
     Type=simple
     ExecStart=/usr/local/bin/node_exporter

     [Install]
     WantedBy=multi-user.target
     Step 4: Reload the system daemon and start the node exporter service:
     sudo systemctl daemon-reload
     sudo systemctl start node_exporter
     Step 5: Check the node exporter status to make sure it is running in the active state:
     sudo systemctl status node_exporter
     Step 6: Enable the node exporter service at system startup:
     sudo systemctl enable node_exporter
     Now node exporter will be exporting metrics on port 9100. You can see all the server metrics by visiting your server URL at /metrics as shown below:
     http://<server-IP>:9100/metrics
     Configure the Server as a Target on the Prometheus Server
     Now that we have the node exporter up and running on the server, we have to add this server as a target in the Prometheus server configuration.
     Note: This configuration should be done on the Prometheus server.
     Step 1: Log in to the Prometheus server and open the prometheus.yml file:
     sudo vi /etc/prometheus/prometheus.yml
     Step 2: Under the scrape config section, add the node exporter target as shown below. Change 10.142.0.3 to the IP of the server where you set up node exporter. The job name can be your server hostname or IP, for identification purposes.
     - job_name: 'node_exporter_metrics'
       scrape_interval: 5s
       static_configs:
         - targets: ['10.142.0.3:9100']
     Step 3: Restart the Prometheus service for the configuration changes to take effect:
     sudo systemctl restart prometheus
     Now, if you check the targets in the Prometheus web UI (http://<prometheus-IP>:9090/targets), you will be able to see their status. You can also use the Prometheus expression browser to query node-related metrics. The following are a few key node metrics you can use to find its statistics:
     node_memory_MemFree_bytes
     node_cpu_seconds_total
     node_filesystem_avail_bytes
     rate(node_cpu_seconds_total{mode="system"}[1m])
     rate(node_network_receive_bytes_total[1m])
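     As a quick addendum, before touching the Prometheus side you can confirm from the node's own shell that the exporter is answering; the first lines of output should be # HELP/# TYPE metric headers:
     curl -s http://localhost:9100/metrics | head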
  15. INSTALL PHP 7.4
     1. Install the EPEL repo and Remi repo:
     # dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
     # dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm -y
     2. Check the PHP module list and enable PHP 7.4:
     # dnf module list php
     # dnf module enable php:remi-7.4 -y
     3. Install PHP and the extensions:
     # dnf install php php-cli php-common php-json php-xml php-mbstring php-mysqli php-zip php-intl
     Disable SELinux
     1. In order to install Pi-hole you need to disable SELinux: edit /etc/selinux/config and set SELINUX=disabled.
     2. Reboot the server.
     Disable Firewall (optional)
     1. Disable the firewall, or configure it for Pi-hole (e.g. sudo firewall-cmd --permanent --add-service=dns followed by sudo firewall-cmd --reload). To disable it:
     sudo systemctl stop firewalld
     sudo systemctl disable firewalld
     INSTALL PI-HOLE
     1. Download and install Pi-hole:
     # git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
     # cd "Pi-hole/automated install/"
     # sed -i "s/lighttpd\slighttpd-fastcgi//" basic-install.sh
     # chmod +x basic-install.sh
     # ./basic-install.sh
     Setting up Pi-hole as a recursive DNS server solution
     sudo dnf install unbound
     1. Back up the file /etc/unbound/unbound.conf:
     mv /etc/unbound/unbound.conf /etc/unbound/unbound.conf.bak
     2. Create a new unbound.conf file:
     nano /etc/unbound/unbound.conf
     3. Add the following line and save:
     include: "/etc/unbound/unbound.conf.d/*.conf"
     4. Create /etc/unbound/unbound.conf.d/pi-hole.conf:
     server:
         # If no logfile is specified, syslog is used
         # logfile: "/var/log/unbound/unbound.log"
         verbosity: 0
         interface: 127.0.0.1
         port: 5335
         do-ip4: yes
         do-udp: yes
         do-tcp: yes

         # May be set to yes if you have IPv6 connectivity
         do-ip6: no

         # You want to leave this to no unless you have *native* IPv6. With 6to4 and
         # Teredo tunnels your web browser should favor IPv4 for the same reasons
         prefer-ip6: no

         # Use this only when you downloaded the list of primary root servers!
         # If you use the default dns-root-data package, unbound will find it automatically
         #root-hints: "/var/lib/unbound/root.hints"

         # Trust glue only if it is within the server's authority
         harden-glue: yes

         # Require DNSSEC data for trust-anchored zones; if such data is absent, the zone becomes BOGUS
         harden-dnssec-stripped: yes

         # Don't use capitalization randomization as it is known to cause DNSSEC issues sometimes
         # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
         use-caps-for-id: no

         # Reduce EDNS reassembly buffer size.
         # IP fragmentation is unreliable on the Internet today, and can cause
         # transmission failures when large DNS messages are sent via UDP. Even
         # when fragmentation does work, it may not be secure; it is theoretically
         # possible to spoof parts of a fragmented DNS message, without easy
         # detection at the receiving end. Recently, there was an excellent study
         # >>> Defragmenting DNS - Determining the optimal maximum UDP response size for DNS <<<
         # by Axel Koolhaas, and Tjeerd Slokker (https://indico.dns-oarc.net/event/36/contributions/776/)
         # in collaboration with NLnet Labs explored DNS using real world data from
         # the RIPE Atlas probes and the researchers suggested different values for
         # IPv4 and IPv6 and in different scenarios. They advise that servers should
         # be configured to limit DNS messages sent over UDP to a size that will not
         # trigger fragmentation on typical network links. DNS servers can switch
         # from UDP to TCP when a DNS response is too big to fit in this limited
         # buffer size. This value has also been suggested in DNS Flag Day 2020.
         edns-buffer-size: 1232

         # Perform prefetching of close to expired message cache entries
         # This only applies to domains that have been frequently queried
         prefetch: yes

         # One thread should be sufficient; it can be increased on beefy machines. In reality for
         # most users running on small networks or on a single machine, it should be unnecessary
         # to seek performance enhancement by increasing num-threads above 1.
         num-threads: 1

         # Ensure kernel buffer is large enough to not lose messages in traffic spikes
         so-rcvbuf: 1m

         # Ensure privacy of local IP ranges
         private-address: 192.168.0.0/16
         private-address: 169.254.0.0/16
         private-address: 172.16.0.0/12
         private-address: 10.0.0.0/8
         private-address: fd00::/8
         private-address: fe80::/10
     Start your local recursive server and test that it's operational:
     sudo service unbound restart
     dig pi-hole.net @127.0.0.1 -p 5335
     The first query may be quite slow, but subsequent queries, also to other domains under the same TLD, should be fairly quick. You should also consider adding edns-packet-max=1232 to a config file like /etc/dnsmasq.d/99-edns.conf to signal FTL to adhere to this limit.
     Test validation
     You can test DNSSEC validation using:
     dig sigfail.verteiltesysteme.net @127.0.0.1 -p 5335
     dig sigok.verteiltesysteme.net @127.0.0.1 -p 5335
     The first command should give a status report of SERVFAIL and no IP address. The second should give NOERROR plus an IP address.
     Configure Pi-hole
     Finally, configure Pi-hole to use your recursive DNS server by specifying 127.0.0.1#5335 as the Custom DNS (IPv4). (Don't forget to hit Return or click on Save.)
     Disable resolvconf for unbound (optional)
     The unbound package can come with a systemd service called unbound-resolvconf.service, enabled by default. It instructs resolvconf to write unbound's own DNS service, nameserver 127.0.0.1 (but without the 5335 port), into the file /etc/resolv.conf. That /etc/resolv.conf file is used by local services/processes to determine the configured DNS servers. If you configured /etc/dhcpcd.conf with a static domain_name_servers= line, these DNS server(s) will be ignored/overruled by this service. To check if this service is enabled for your distribution, run the command below and take note of the Active line. It will show either active or inactive, or the service might not even be installed, resulting in a "could not be found" message:
     sudo systemctl status unbound-resolvconf.service
     To disable the service if so desired, run these two:
     sudo systemctl disable unbound-resolvconf.service
     sudo systemctl stop unbound-resolvconf.service
     To have the domain_name_servers= in the file /etc/dhcpcd.conf activated/propagated, run:
     sudo systemctl restart dhcpcd
     And check with the following whether the IP(s) on the nameserver line(s) reflect the ones in the /etc/dhcpcd.conf file:
     cat /etc/resolv.conf
     Add logging to unbound
     Warning: it's not recommended to increase verbosity for daily use, as unbound logs a lot, but it might be helpful for debugging purposes.
     There are five levels of verbosity:
     Level 0 means no verbosity, only errors
     Level 1 gives operational information
     Level 2 gives detailed operational information
     Level 3 gives query level information
     Level 4 gives algorithm level information
     Level 5 logs client identification for cache misses
     First, specify the log file and the verbosity level in the server part of /etc/unbound/unbound.conf.d/pi-hole.conf:
     server:
         # If no logfile is specified, syslog is used
         logfile: "/var/log/unbound/unbound.log"
         verbosity: 1
     Second, create the log dir and file, and set permissions:
     sudo mkdir -p /var/log/unbound
     sudo touch /var/log/unbound/unbound.log
     sudo chown unbound /var/log/unbound/unbound.log
     Third, restart unbound:
     sudo service unbound restart
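     After any of these edits, unbound's bundled syntax checker can catch mistakes before a restart; a quick check worth running:
     unbound-checkconf /etc/unbound/unbound.conf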
  16. If the Red Hat Insights site is reflecting a different hostname, simply run the following to make the insights-client check back in:
     # insights-client --version
  17. Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.
     Prerequisites
     To follow this tutorial, you will need the following:
     One Ubuntu 20.04 server.
     An account on Docker Hub if you wish to create your own images and push them to Docker Hub.
     Step 1 — Installing Docker
     The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.
     First, update your existing list of packages:
     sudo apt update
     Next, install a few prerequisite packages which let apt use packages over HTTPS:
     sudo apt install apt-transport-https ca-certificates curl software-properties-common
     Then add the GPG key for the official Docker repository to your system:
     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
     Add the Docker repository to APT sources:
     sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
     This will also update our package database with the Docker packages from the newly added repo.
     Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:
     sudo apt-cache policy docker-ce
     You'll see output like this, although the version number for Docker may be different:
     docker-ce:
       Installed: (none)
       Candidate: 5:19.03.9~3-0~ubuntu-focal
       Version table:
         5:19.03.9~3-0~ubuntu-focal 500
           500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
     Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 20.04 (focal).
     Finally, install Docker:
     sudo apt install docker-ce
     Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:
     sudo systemctl status docker
     The output should be similar to the following, showing that the service is active and running:
     ● docker.service - Docker Application Container Engine
        Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
        Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago
     TriggeredBy: ● docker.socket
          Docs: https://docs.docker.com
      Main PID: 24321 (dockerd)
         Tasks: 8
        Memory: 46.4M
        CGroup: /system.slice/docker.service
                └─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
     Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.
     Step 2 — Executing the Docker Command Without Sudo (Optional)
     By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:
     docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
     See 'docker run --help'.
     If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
     sudo usermod -aG docker ${USER}
     To apply the new group membership, log out of the server and back in, or type the following:
     su - ${USER}
     You will be prompted to enter your user's password to continue.
     Confirm that your user is now added to the docker group by typing:
     groups
     Output:
     sammy sudo docker
     If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:
     sudo usermod -aG docker username
     The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.
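     A quick smoke test that the install (and the group change) worked; hello-world is the standard tiny test image on Docker Hub:
     docker run hello-world
     If the daemon pulls the image and prints its greeting, Docker is functioning end to end.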
  18. Standard Docker Setup
     If you already have Portainer installed, you'll need to stop and remove it from your system before you upgrade the container. To do that, run this command:
     sudo docker stop portainer && sudo docker rm portainer
     You will probably be prompted for your sudo password. Enter that, and the system will remove the Portainer container, but it will NOT delete your Portainer data, as we didn't remove that volume.
     Next, you'll want to pull the latest Portainer image:
     sudo docker pull portainer/portainer-ce:latest
     Once that is done, you're ready to deploy the newest version of Portainer:
     sudo docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
     Now you can go to http://your-server-address:9000 and log in.
     Note: Doing this will NOT remove your other applications/containers/etc.
  19. I want to run a cron job that should run a specific shell script, /home/jobs/sync.cache.sh, every minute. How do I use crontab to execute a script every minute on a Linux or Unix-like system? How can I run a cron job every minute on Ubuntu Linux?
     Cron is one of the most useful tools in a Linux or UNIX-like operating system. It is usually used for sysadmin jobs such as backups or cleaning /tmp/ directories and more. Let us see how we can run a cron job every minute on Linux, *BSD and Unix-like systems.
     Run cron job every minute
     The syntax is:
     * * * * * /path/to/your/script
     To run a script called /home/vivek/bin/foo, type the crontab command:
     $ crontab -e
     Append the following job:
     * * * * * /home/vivek/bin/foo
     Save and close the file.
     How does it work?
     The syntax for crontab is as follows:
     * * * * * command to be executed
     - - - - -
     | | | | |
     | | | | ----- Day of week (0 - 7) (Sunday = 0 or 7)
     | | | ------- Month (1 - 12)
     | | --------- Day of month (1 - 31)
     | ----------- Hour (0 - 23)
     ------------- Minute (0 - 59)
     The asterisk (*) operator specifies all possible values for a field. For example, an asterisk in the hour field would be equivalent to every hour, and an asterisk in the month field would be equivalent to every month. An asterisk in every field means: run the given command/script every minute.
     A note about using the /etc/cron.d/ directory
     If you put a cronjob in the /etc/cron.d/ directory you must provide the username to run the task as in the task definition:
     * * * * * USERNAME /path/to/your/script
     For example, run a script that uses rsync to replicate changed files. Create a file named /etc/cron.d/rsync.job:
     $ sudo vi /etc/cron.d/rsync.job
     Append the following:
     PATH=/sbin:/usr/sbin:/bin:/usr/bin
     # Start job every 1 minute
     * * * * * root /root/bin/replication.sh
     # Another example of a job set up to run every 1 minute, a commonly used cron schedule
     * * * * * root /usr/bin/perl /usr/lib/cgi-bin/check.for.errors.cgi
     Save and close the file. Here is a sample /root/bin/replication.sh file:
     #!/bin/bash
     # Usage: A sample shell script to replicate newly added
     # HTML files/images/js etc on all $servers i.e. poor man's
     # file replication service ;)
     #
     # Author: Vivek Gite, under GPL v2.0+
     #
     # Note: Set up ssh pub key based auth for this script to work
     # ------------------------------------------------------------
     _rsync="/usr/bin/rsync"
     _rsync_opt='-az -H --delete --numeric-ids --exclude=cache/css --exclude=tmp/js'

     # user name for ssh
     u="vivek"

     # server nodes
     servers="node01 node02"

     # Source and dest
     S='/home/vivek/wwwfiles/'
     D='/home/vivek/wwwfiles'

     # Let us loop it and do it
     for b in ${servers}
     do
       ${_rsync} ${_rsync_opt} "$@" ${S} ${u}@${b}:${D}
     done
     A note about dealing with race conditions when running a cron job every minute
     We are going to use the flock command, which manages flock(2) locks from within shell scripts or from the command line. Modify your script as follows to ensure only one instance of a Bash script is running every minute:
     #!/bin/bash
     ## Copyright (C) 2009 Przemyslaw Pawelczyk <przemoc@gmail.com>
     ##
     ## This script is licensed under the terms of the MIT license.
     ## Source https://gist.github.com/przemoc/571091
     ## https://opensource.org/licenses/MIT
     #
     # Lockable script boilerplate

     ### HEADER ###

     LOCKFILE="/var/lock/`basename $0`"
     LOCKFD=99

     # PRIVATE
     _lock()             { flock -$1 $LOCKFD; }
     _no_more_locking()  { _lock u; _lock xn && rm -f $LOCKFILE; }
     _prepare_locking()  { eval "exec $LOCKFD>\"$LOCKFILE\""; trap _no_more_locking EXIT; }

     # ON START
     _prepare_locking

     # PUBLIC
     exlock_now()        { _lock xn; }  # obtain an exclusive lock immediately or fail
     exlock()            { _lock x; }   # obtain an exclusive lock
     shlock()            { _lock s; }   # obtain a shared lock
     unlock()            { _lock u; }   # drop a lock

     # Simplest example is avoiding running multiple instances of script.
     exlock_now || exit 1

     ### BEGIN OF SCRIPT ###
     _rsync="/usr/bin/rsync"
     _rsync_opt='-az -H --delete --numeric-ids --exclude=cache/css --exclude=tmp/js'

     # user name for ssh
     u="vivek"

     # server nodes
     servers="node01 node02"

     # Source and dest
     S='/home/vivek/wwwfiles/'
     D='/home/vivek/wwwfiles'

     # Let us loop it and do it
     for b in ${servers}
     do
       ${_rsync} ${_rsync_opt} "$@" ${S} ${u}@${b}:${D}
     done
     ### END OF SCRIPT ###

     # Remember! Lock file is removed when one of the scripts exits and it is
     # the only script holding the lock or lock is not acquired at all.
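     If the boilerplate feels heavy, the same protection is available directly from the crontab line; a sketch using the util-linux flock utility (-n fails immediately instead of queueing when the previous run still holds the lock; the lock file path is arbitrary):
     * * * * * /usr/bin/flock -n /var/lock/sync.cache.lock /home/jobs/sync.cache.sh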
  20. First, we will download the Node Exporter on all machines; check for the latest version on the node_exporter releases page:
     wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
     Extract the downloaded archive:
     tar -xf node_exporter-1.2.2.linux-amd64.tar.gz
     Move the node_exporter binary to /usr/local/bin:
     sudo mv node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin
     Remove the residual files with:
     rm -r node_exporter-1.2.2.linux-amd64*
     Next, we will create a user and a service file for node_exporter. For security reasons, it is always recommended to run any service/daemon in a separate account of its own, so we are going to create a user account for node_exporter. We use the -r flag to indicate it is a system account, and set the default shell to /bin/false using -s to prevent logins.
     sudo useradd -rs /bin/false node_exporter
     Then, we will create a systemd unit file so that node_exporter can be started at boot:
     sudo nano /etc/systemd/system/node_exporter.service
     [Unit]
     Description=Node Exporter
     After=network.target

     [Service]
     User=node_exporter
     Group=node_exporter
     Type=simple
     ExecStart=/usr/local/bin/node_exporter

     [Install]
     WantedBy=multi-user.target
     Since we have created a new unit file, we must reload the systemd daemon, set the service to always run at boot, and start it:
     sudo systemctl daemon-reload
     sudo systemctl enable node_exporter
     sudo systemctl start node_exporter
     sudo systemctl status node_exporter
     Configure UFW / Firewall
     Ubuntu:
     sudo ufw allow from 10.0.0.46 to any port 9100
     sudo ufw status numbered
  21. Prometheus is an open-source system and service monitoring and alerting tool used for recording real-time services and collecting metrics in a time-series database. It is written in Go and licensed under the Apache 2 License, originally developed by SoundCloud. In this tutorial, we will show you how to install Prometheus on an Ubuntu 20.04 server, which can be done easily if you follow it step by step.
     Requirements:
     For the purposes of this tutorial, we will use an Ubuntu 20.04 VPS.
     Access to the root user account (or a user with sudo privileges).
     Step 1: Log in to the Server & Update the Server OS Packages
     First, log in to your Ubuntu 20.04 server via SSH as the root user:
     ssh root@IP_ADDRESS -p PORT_NUMBER
     Don't forget to replace IP_ADDRESS and PORT_NUMBER with your server's actual IP address and SSH port number. Also, replace 'root' with the username of the admin account if needed.
     Once you are in, run the following commands to update the package index and upgrade all installed packages to the latest available versions:
     apt-get update
     apt-get upgrade
     Step 2: Creating the Prometheus System User and Directories
     The Prometheus server requires a service user account to run. You can name your user however you like, but we will create a user named prometheus. This will be a system user (-r) who is unable to get a shell (-s /bin/false):
     useradd --no-create-home -rs /bin/false prometheus
     Also, we need to create directories for configuration files and other Prometheus data:
     mkdir /etc/prometheus
     mkdir /var/lib/prometheus
     Now we have to update the group and user ownership on the newly created directories:
     chown prometheus:prometheus /etc/prometheus
     chown prometheus:prometheus /var/lib/prometheus
     Step 3: Download the Prometheus Binary File
     Prometheus is included by default in the Ubuntu 20.04 repositories:
     apt-cache policy prometheus
     prometheus:
       Installed: (none)
       Candidate: 2.15.2+ds-2
       Version table:
         2.15.2+ds-2 500
           500 http://us.archive.ubuntu.com/ubuntu focal/universe amd64 Packages
     However, the Prometheus release version provided by the default Ubuntu repositories may not be up to date. At the time of writing this article, the latest stable version of Prometheus is 2.30.3. Before downloading, visit the official Prometheus downloads page and check if there is a newer version available. You can download it using the following command:
     wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz
     Once the tarball is downloaded, verify its checksum with the following command:
     sha256sum prometheus-2.30.3.linux-amd64.tar.gz
     You should see output that looks similar to the one below:
     1ccd386d05f73a98b69aa5e0ed31fffac95cd9dadf7df1540daf2f182c5287e2  prometheus-2.30.3.linux-amd64.tar.gz
     Compare the hash value from the above output to the checksum value on the Prometheus download page. If they match, the file's integrity is validated.
     Now that you have successfully downloaded the Prometheus file, extract it to the /opt directory using the tar command:
     tar xvzf prometheus-2.30.3.linux-amd64.tar.gz -C /opt
     Next, you need to copy the binary files to the /usr/local/bin directory and fix the permissions.
     This is done with the following commands:
     mv /opt/prometheus-2.30.3.linux-amd64/prometheus /opt/prometheus-2.30.3.linux-amd64/promtool /usr/local/bin/
     chown prometheus:prometheus /usr/local/bin/prometheus /usr/local/bin/promtool
     Also, we need to copy the consoles and console_libraries directories to the Prometheus configuration directory, /etc/prometheus:
     mv /opt/prometheus-2.30.3.linux-amd64/consoles /opt/prometheus-2.30.3.linux-amd64/console_libraries /etc/prometheus/
     chown -R prometheus:prometheus /etc/prometheus/consoles /etc/prometheus/console_libraries
     Step 4: Create the Prometheus Configuration File
     A Prometheus configuration file comes prepared in the extracted archive folder; you just need to copy it to the Prometheus configuration directory, /etc/prometheus:
     mv /opt/prometheus-2.30.3.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml
     chown prometheus:prometheus /etc/prometheus/prometheus.yml
     The content of the prometheus.yml file:
     # my global config
     global:
       scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
       evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
       # scrape_timeout is set to the global default (10s).

     # Alertmanager configuration
     alerting:
       alertmanagers:
         - static_configs:
             - targets:
               # - alertmanager:9093

     # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
     rule_files:
       # - "first_rules.yml"
       # - "second_rules.yml"

     # A scrape configuration containing exactly one endpoint to scrape:
     # Here it's Prometheus itself.
     scrape_configs:
       # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
       - job_name: "prometheus"
         # metrics_path defaults to '/metrics'
         # scheme defaults to 'http'.
         static_configs:
           - targets: ["localhost:9090"]
     The configuration is set up to scrape every 15 seconds, and Prometheus listens on port 9090.
     Linux server scrape example:
     global:
       scrape_interval: 1s

     scrape_configs:
       - job_name: 'prometheus'
         scrape_interval: 5s
         static_configs:
           - targets: ['prometheus.linux-network.home:9090']
           - targets: ['plex.linux-network.home:9100']
           - targets: ['grafana.linux-network.home:9100']
           - targets: ['NS1.linux-network.home:9100']
           - targets: ['NS2.linux-network.home:9100']
           - targets: ['WEB1.linux-network.home:9100']
           - targets: ['DB1.linux-network.home:9100']
           - targets: ['PVE2.linux-network.home:9100']
     Step 5: Create the Prometheus Systemd Service File
     Now we need to create a systemd service file:
     nano /etc/systemd/system/prometheus.service
     In that file, add the following content:
     [Unit]
     Description=Prometheus
     Wants=network-online.target
     After=network-online.target

     [Service]
     User=prometheus
     Group=prometheus
     Type=simple
     ExecStart=/usr/local/bin/prometheus \
         --config.file /etc/prometheus/prometheus.yml \
         --storage.tsdb.path /var/lib/prometheus/ \
         --web.console.templates=/etc/prometheus/consoles \
         --web.console.libraries=/etc/prometheus/console_libraries

     [Install]
     WantedBy=multi-user.target
     After adding the content, save and close the file. To use the newly created service you will have to reload the daemon services:
     systemctl daemon-reload
     You can now start and enable the Prometheus service using the commands below:
     systemctl start prometheus
     systemctl enable prometheus
     To check and verify the status of your Prometheus service, run the following command:
     systemctl status prometheus
     Output:
     ● prometheus.service - Prometheus
        Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
        Active: active (running) since Sat 2021-10-23 19:15:21 UTC; 4s ago
      Main PID: 9884 (prometheus)
         Tasks: 1 (limit: 2245)
        Memory: 336.0K
        CGroup: /system.slice/prometheus.service
                └─9884 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries
     Prometheus installation and configuration are now set up; you can see the status Active: active (running). The Prometheus service is up and running, and you can access it from any web browser:
     http://Your_server_IP:9090
     To check the status of your nodes, go to Status > Targets. That's it! The installation of Prometheus on Ubuntu 20.04 has been completed.
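     Whenever you edit prometheus.yml later, promtool (installed to /usr/local/bin alongside prometheus above) can validate the file before a restart; a quick sanity check:
     promtool check config /etc/prometheus/prometheus.yml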