brent

Administrators
  • Posts

    78

  1. You can remove a PPA from the sources list where these PPAs are stored. PPA repositories are stored in files of the form PPA_Name.list. Use the following command to see all the PPAs added to your system: ls /etc/apt/sources.list.d
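A minimal sketch of actually removing one of the listed PPAs, assuming a hypothetical entry named example-ppa.list (use the file name from your own listing, or the ppa:owner/name form if you know it):
# delete the PPA's .list file directly
sudo rm /etc/apt/sources.list.d/example-ppa.list
# or remove it via add-apt-repository and refresh the package index
sudo add-apt-repository --remove ppa:example/ppa
sudo apt update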
  2. Installation instructions are located at https://linuxiac.com/nala-apt-command-frontend/
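As a hedged aside, on recent Debian/Ubuntu releases nala may already be available from the default repositories, so the install can be as simple as the line below; the linked guide covers the other cases:
sudo apt install nala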
  3. Run the following commands to identify your current DNS settings.
Ubuntu 22.04: resolvectl status | grep "DNS Server" -A2
Ubuntu 20.04 or older: systemd-resolve --status | grep 'DNS Servers' -A2
  4. Open a super-user shell: sudo bash
Go to the Nextcloud folder: cd /var/www/nextcloud
Run the following command: sudo -u www-data php occ db:add-missing-indices
Finally, the output should look something like this: Check indices of the share table. Adding additional parent index to the share table, this can take some time… Share table updated successfully. Adding additional mtime index to the filecache table, this can take some time… Filecache table updated successfully.
And that's all; the problem should now be fixed.
  5. To remove the “You do not have a valid subscription for this server” popup that appears when logging in, run the command below. You’ll need to SSH to your Proxmox server or use the node console through the PVE web interface. If you have issues and need to revert the changes, see the instructions at the bottom of this post. Whenever an update to your Proxmox server includes the proxmox-widget-toolkit package, you’ll need to repeat this modification. It works with versions 5.1 and newer, tested up to the version shown in the title.
Run the following one-line command and then clear your browser cache (depending on the browser you may need to open a new tab or restart the browser):
sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
Manual Steps
Here are the equivalent step-by-step instructions so you can understand what the above command is doing:
1. Change to the working directory: cd /usr/share/javascript/proxmox-widget-toolkit
2. Make a backup: cp proxmoxlib.js proxmoxlib.js.bak
3. Edit the file: nano proxmoxlib.js
4. Locate the following code (use Ctrl+W in nano and search for “No valid subscription”): Ext.Msg.show({ title: gettext('No valid subscription'),
5. Replace “Ext.Msg.show” with “void”: void({ //Ext.Msg.show({ title: gettext('No valid subscription'),
6. Restart the Proxmox web service (and be sure to clear your browser cache; depending on the browser you may need to open a new tab or restart the browser): systemctl restart pveproxy.service
Additional Notes
You can quickly check whether the change has been made: grep -n -B 1 'No valid sub' proxmoxlib.js
You have three options to revert the changes:
Manually edit proxmoxlib.js to undo the changes you made
Restore the backup file you created in the proxmox-widget-toolkit directory: mv proxmoxlib.js.bak proxmoxlib.js
Reinstall the proxmox-widget-toolkit package from the repository: apt-get install --reinstall proxmox-widget-toolkit
  6. If you remove a server and it errors out saying storage "images" (or whatever you named your storage) doesn't exist, do the following. Navigate to your storage and ensure the image is gone. Check your storage: my storage location is /mnt/wd-external/images/images. Next, navigate to /etc/pve/qemu-server and remove the server's <vmid>.conf file (see the sketch below).
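A minimal sketch of that last step, assuming a hypothetical VM ID of 105 (substitute the ID of the server you removed):
# confirm the leftover config file exists, then delete it
ls /etc/pve/qemu-server
rm /etc/pve/qemu-server/105.conf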
  7. If you are running a web server, database, or Plex server, you might find that your memory usage sits at 90%+ all the time. Here is how to clear it up.
1. Create a file in /usr/local/bin. I am calling it free-memory, but you can name the file whatever you want: sudo nano /usr/local/bin/free-memory
2. Add the command, then save and close: free -h && sudo sysctl -w vm.drop_caches=3 && sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free -h
3. Change the permissions of the script: sudo chmod 555 /usr/local/bin/free-memory
4. Create a cron job: sudo crontab -e
5. I want to run mine every minute; change this to suit your needs: * * * * * /usr/local/bin/free-memory
6. Check your server after a minute; you should see your memory usage change.
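For reference, a minimal sketch of what /usr/local/bin/free-memory could contain as a standalone script; the shebang line is an addition, and the body mirrors the one-liner from step 2 (sudo is dropped because the root crontab already runs it as root):
#!/bin/bash
# show memory usage, flush filesystem buffers, drop caches, then show usage again
free -h
sync
sysctl -w vm.drop_caches=3
free -h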
  8. Accessing a Gmail Account from Nextcloud
Due to Google's security policies, accessing your Gmail account from Nextcloud requires additional steps.
If you use two-factor authentication, you'll need to generate an app password:
Visit https://myaccount.google.com/apppasswords from a web browser
At the bottom of the page, click the drop-down box labeled "Select app"
Choose the option "Other (Custom name)"
Enter a descriptive name, such as "Nextcloud Mail"
Click "Generate"
Go back to Nextcloud (Mail/Rainloop) and enter your e-mail address and the app password you just generated
Your Gmail account should now be accessible from within Nextcloud
If you are not using two-factor authentication, you'll need to allow "Less Secure Apps":
Visit https://myaccount.google.com/lesssecureapps
Toggle the radio button "Allow less secure apps" to the "ON" position
Go back to Nextcloud (Mail/Rainloop) and enter your e-mail address and Google password
Your Gmail account should now be accessible from within Nextcloud
  9. The Issue
We want to prevent the snapd service from starting automatically on system boot on Ubuntu, or remove Snap completely.
1 Disable snap services
1.1 Bring up the terminal or log in via SSH
1.2 Execute the following commands to disable the snap services: sudo systemctl disable snapd.service sudo systemctl disable snapd.socket sudo systemctl disable snapd.seeded sudo systemctl disable snapd.snap-repair.timer
1.3 Restart the system: sudo reboot
1.4 The snap service will no longer start on system startup
2 Removing Snap
To uninstall snap (if necessary), we need to make sure snap is not used at all. If we want to uninstall/remove snap, just follow the steps below.
2.1 List all snaps: snap list
2.2 If there is any installed snap package, e.g. bashtop, remove them one by one (or all at once; see the sketch after this list): sudo snap remove bashtop
2.3 Find the snap core service directory ID: df From the output, under the “Mounted on” column, find the ones with “/snap/core/xxxx”
2.4 Unmount the snap core service: sudo umount /snap/core/xxxx
2.5 Remove the snapd package: sudo apt purge snapd
2.6 Remove all directories (if necessary); be careful with the rm command, we will lose data if done incorrectly: rm -rf ~/snap sudo rm -rf /snap sudo rm -rf /var/snap sudo rm -rf /var/lib/snapd
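As a small sketch for step 2.2, assuming you would rather remove every installed snap in one pass than one by one (the loop simply feeds the names from snap list back into snap remove; some snaps such as core may need a second pass because of dependencies):
# remove every listed snap, skipping the header line of `snap list`
for pkg in $(snap list | awk 'NR>1 {print $1}'); do
  sudo snap remove "$pkg"
done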
  10. 1: Set your user as the owner: chown -R joe /var/www/your-website.com/ This command sets joe as the owner of every file and folder inside the directory (-R stands for recursive).
2: Set the web server as the group owner: chgrp -R www-data /var/www/your-website.com/ This command sets www-data as the group owner of every file and folder inside the directory. Recursive mode, as above.
3: 750 permissions for everything: chmod -R 750 /var/www/your-website.com/ The third command sets the permissions: read, write and execute (7) for the owner (i.e., you), read and execute (5) for the group owner (i.e., the web server), and no permissions at all (0) for others. Once again this is done on every file and folder in the directory, recursively.
4: New files and folders inherit group ownership from the parent folder: chmod g+s /var/www/your-website.com/ The last command makes all files/folders created within the directory automatically take on the group ownership of the parent folder, that is, your web server. The s flag is a special mode bit that represents setuid/setgid. In simple words, new files and directories created by the web server will have the same group ownership as the your-website.com/ folder, which we set to www-data with the second command (see the extra sketch after this post).
When the web server needs to write: if you have folders that need to be writable by the web server, you can just modify the permission values for the group owner so that www-data has write access. Run this command on each writable folder: chmod g+w /var/www/your-website.com/<writable-folder> For security reasons, apply this only where necessary and not to the whole website directory.
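One extra sketch related to step 4: chmod g+s on the top folder does not touch directories that already exist below it, so if the site already has subfolders you may also want the setgid bit on those (path as in the examples above):
# set the setgid bit on every existing directory under the site root
find /var/www/your-website.com/ -type d -exec chmod g+s {} +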
  11. APCu APCu is a data cache, and it is available in most Linux distributions. On Red Hat/CentOS/Fedora systems install php-pecl-apcu. On Debian/Ubuntu/Mint systems install php-apcu. After restarting your Web server, add this line to your config.php file: 'memcache.local' => '\OC\Memcache\APCu', Refresh your Nextcloud admin page, and the cache warning should disappear.
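A quick sketch of the Debian/Ubuntu route described above, assuming Apache with mod_php; adjust the restart command if you use nginx or PHP-FPM:
sudo apt install php-apcu
sudo systemctl restart apache2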
  12. To enable previews for files in Nextcloud, you need to install “Preview Generator” from the Nextcloud app store: https://apps.nextcloud.com/apps/previewgenerator
To install it, log in to Nextcloud as admin. From the right drop-down menu, click the + Apps link. Once on the Apps page, use the search button on the right side to search for "Preview Generator" and install it.
You also need to install some additional software; on Ubuntu/Debian install it with: sudo apt install libreoffice ffmpeg imagemagick ghostscript
Now edit the config/config.php file of your Nextcloud installation and add the following code:
'enable_previews' => true, 'preview_libreoffice_path' => '/usr/bin/libreoffice', 'enabledPreviewProviders' => array ( 0 => 'OC\\Preview\\TXT', 1 => 'OC\\Preview\\MarkDown', 2 => 'OC\\Preview\\OpenDocument', 3 => 'OC\\Preview\\PDF', 4 => 'OC\\Preview\\MSOffice2003', 5 => 'OC\\Preview\\MSOfficeDoc', 6 => 'OC\\Preview\\PDF', 7 => 'OC\\Preview\\Image', 8 => 'OC\\Preview\\Photoshop', 9 => 'OC\\Preview\\TIFF', 10 => 'OC\\Preview\\SVG', 11 => 'OC\\Preview\\Font', 12 => 'OC\\Preview\\MP3', 13 => 'OC\\Preview\\Movie', 14 => 'OC\\Preview\\MKV', 15 => 'OC\\Preview\\MP4', 16 => 'OC\\Preview\\AVI', ),
For more info on configuration, check the Nextcloud documentation.
Generate previews for existing files: let's generate thumbnails for existing files. For this, I enabled shell access for www-data so the preview files have proper file ownership (not owned by root): chsh --shell /bin/bash www-data
Now change to the www-data user: su - www-data
Now run: /usr/bin/php /var/www/nextcloud/occ preview:generate-all -vvv
Autogenerate previews for new files: set a cron job as user www-data: crontab -e -u www-data */5 * * * * /usr/bin/php /var/www/nextcloud/occ preview:pre-generate > /dev/null 2>&1
  13. Letsencrypt Auto Renew Testing:
This part is optional, but I recommend testing your auto-renew cron script for errors. It will be a disaster if your Letsencrypt certificate does not renew before it expires because of some error.
Basic testing using --dry-run: For error checking we’ll perform certbot renew --dry-run or path/location/certbot-auto renew --dry-run, a process in which the auto-renew script is executed without actually renewing the certificates. Execute the following lines in your Linux terminal: sudo -i certbot renew --dry-run && apache-restart-command
Testing using --force-renew: In this advanced testing section we’ll simulate the Letsencrypt auto certificate renewal process by using the --force-renew option. As you already know, the certbot renew command only takes action if your certificate has less than 30 days left. But if we use it with --force-renew, your certificate gets renewed immediately. Remember that you can only renew 5 certificates per week for a particular domain or subdomain.
Note the date of your current certificate: to view the current expiry date of your Let's Encrypt certificate, execute the following command in your terminal: sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem
Check if renewal was successful: now, let's check the Let's Encrypt certificate's expiry date again: sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem
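Since the post talks about an auto-renew cron script without showing one, here is a minimal sketch of such a crontab entry; the schedule and the Apache reload deploy hook are assumptions to adapt to your own setup:
# run twice a day; certbot only renews certificates that are close to expiry
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload apache2"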
  14. Node exporter is the best way to collect all the Linux server related metrics and statistics for monitoring.
Monitor Linux Servers Using Prometheus
In this guide, you will learn how to set up the Prometheus node exporter on a Linux server to export all node-level metrics to the Prometheus server.
Before You Begin
Prometheus Node Exporter needs a Prometheus server to be up and running; set up Prometheus first if you haven't already. Port 9100 must be opened in the server firewall, as Prometheus reads metrics on this port.
Setup Node Exporter Binary
Step 1: Download the latest node exporter package. You should check the Prometheus downloads section for the latest version and update this command to get that package. wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
Step 2: Unpack the tarball. tar -xvf node_exporter-1.3.1.linux-amd64.tar.gz
Step 3: Move the node exporter binary to /usr/local/bin. sudo mv node_exporter-1.3.1.linux-amd64/node_exporter /usr/local/bin/
Create a Custom Node Exporter Service
Step 1: Create a node_exporter user to run the node exporter service. sudo useradd -rs /bin/false node_exporter
Step 2: Create a node_exporter service file under systemd. sudo vi /etc/systemd/system/node_exporter.service
Step 3: Add the following service file content to the service file and save it.
[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
Step 4: Reload the systemd daemon and start the node exporter service. sudo systemctl daemon-reload sudo systemctl start node_exporter
Step 5: Check the node exporter status to make sure it is running in the active state. sudo systemctl status node_exporter
Step 6: Enable the node exporter service at system startup. sudo systemctl enable node_exporter
Now, node exporter will be exporting metrics on port 9100. You can see all the server metrics by visiting your server URL at /metrics as shown below. http://<server-IP>:9100/metrics
Configure the Server as a Target on the Prometheus Server
Now that we have the node exporter up and running on the server, we have to add this server as a target in the Prometheus server configuration. Note: this configuration should be done on the Prometheus server.
Step 1: Log in to the Prometheus server and open the prometheus.yml file. sudo vi /etc/prometheus/prometheus.yml
Step 2: Under the scrape config section, add the node exporter target as shown below. Replace 10.142.0.3 with the IP of the server where you set up node exporter. The job name can be your server hostname or IP for identification purposes.
- job_name: 'node_exporter_metrics'
  scrape_interval: 5s
  static_configs:
    - targets: ['10.142.0.3:9100']
Step 3: Restart the prometheus service for the configuration changes to take effect. sudo systemctl restart prometheus
Now, if you check the targets in the Prometheus web UI (http://<prometheus-IP>:9090/targets), you will be able to see the status as shown below. Also, you can use the Prometheus expression browser to query node-related metrics. The following are a few key node metrics you can use to find its statistics: node_memory_MemFree_bytes node_cpu_seconds_total node_filesystem_avail_bytes rate(node_cpu_seconds_total{mode="system"}[1m]) rate(node_network_receive_bytes_total[1m])
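A quick way to verify the exporter from the node itself before adding it as a Prometheus target; the grep pattern is only illustrative:
# fetch the metrics endpoint locally and show a few node_ metrics
curl -s http://localhost:9100/metrics | grep '^node_' | head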
  15. INSTALL PHP 7.4
1. Install the EPEL and Remi repos:
# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
# dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm -y
2. Check the PHP module list and enable PHP 7.4:
# dnf module list php
# dnf module enable php:remi-7.4 -y
3. Install PHP and the extensions:
# dnf install php php-cli php-common php-json php-xml php-mbstring php-mysqli php-zip php-intl
Disable SELinux
1. In order to install Pi-hole you need to disable SELinux: edit /etc/selinux/config and set SELINUX=disabled.
2. Reboot the server.
Disable Firewall (optional)
1. Disable the firewall, or configure it for Pi-hole:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
INSTALL PI-HOLE
1. Download and install Pi-hole:
# git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
# cd "Pi-hole/automated install/"
# sed -i "s/lighttpd\slighttpd-fastcgi//" basic-install.sh
# chmod +x basic-install.sh
# ./basic-install.sh
Setting up Pi-hole as a recursive DNS server solution
sudo dnf install unbound
1. Back up the file /etc/unbound/unbound.conf:
mv /etc/unbound/unbound.conf /etc/unbound/unbound.conf.bak
2. Create a new unbound.conf file:
nano /etc/unbound/unbound.conf
3. Add the following line and save:
include: "/etc/unbound/unbound.conf.d/*.conf"
4. Create /etc/unbound/unbound.conf.d/pi-hole.conf:
server:
    # If no logfile is specified, syslog is used
    # logfile: "/var/log/unbound/unbound.log"
    verbosity: 0
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    # May be set to yes if you have IPv6 connectivity
    do-ip6: no
    # You want to leave this to no unless you have *native* IPv6. With 6to4 and
    # Teredo tunnels your web browser should favor IPv4 for the same reasons
    prefer-ip6: no
    # Use this only when you downloaded the list of primary root servers!
    # If you use the default dns-root-data package, unbound will find it automatically
    #root-hints: "/var/lib/unbound/root.hints"
    # Trust glue only if it is within the server's authority
    harden-glue: yes
    # Require DNSSEC data for trust-anchored zones; if such data is absent, the zone becomes BOGUS
    harden-dnssec-stripped: yes
    # Don't use capitalization randomization as it is known to sometimes cause DNSSEC issues
    # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
    use-caps-for-id: no
    # Reduce EDNS reassembly buffer size.
    # IP fragmentation is unreliable on the Internet today, and can cause
    # transmission failures when large DNS messages are sent via UDP. Even
    # when fragmentation does work, it may not be secure; it is theoretically
    # possible to spoof parts of a fragmented DNS message, without easy
    # detection at the receiving end. Recently, there was an excellent study
    # >>> Defragmenting DNS - Determining the optimal maximum UDP response size for DNS <<<
    # by Axel Koolhaas and Tjeerd Slokker (https://indico.dns-oarc.net/event/36/contributions/776/)
    # in collaboration with NLnet Labs, which explored DNS using real-world data from
    # the RIPE Atlas probes, and the researchers suggested different values for
    # IPv4 and IPv6 and in different scenarios. They advise that servers should
    # be configured to limit DNS messages sent over UDP to a size that will not
    # trigger fragmentation on typical network links. DNS servers can switch
    # from UDP to TCP when a DNS response is too big to fit in this limited
    # buffer size. This value has also been suggested in DNS Flag Day 2020.
    edns-buffer-size: 1232
    # Perform prefetching of close-to-expired message cache entries
    # This only applies to domains that have been frequently queried
    prefetch: yes
    # One thread should be sufficient; it can be increased on beefy machines. In reality, for most users
    # running on small networks or on a single machine, it should be unnecessary to seek performance
    # enhancement by increasing num-threads above 1.
    num-threads: 1
    # Ensure the kernel buffer is large enough to not lose messages in traffic spikes
    so-rcvbuf: 1m
    # Ensure privacy of local IP ranges
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
    private-address: fd00::/8
    private-address: fe80::/10
Start your local recursive server and test that it's operational:
sudo service unbound restart
dig pi-hole.net @127.0.0.1 -p 5335
The first query may be quite slow, but subsequent queries, also to other domains under the same TLD, should be fairly quick.
You should also consider adding edns-packet-max=1232 to a config file like /etc/dnsmasq.d/99-edns.conf to signal FTL to adhere to this limit (see the sketch at the end of this post).
Test validation
You can test DNSSEC validation using:
dig sigfail.verteiltesysteme.net @127.0.0.1 -p 5335
dig sigok.verteiltesysteme.net @127.0.0.1 -p 5335
The first command should give a status report of SERVFAIL and no IP address. The second should give NOERROR plus an IP address.
Configure Pi-hole
Finally, configure Pi-hole to use your recursive DNS server by specifying 127.0.0.1#5335 as the Custom DNS (IPv4). Don't forget to hit Return or click Save.
Disable resolvconf for unbound (optional)
The unbound package can come with a systemd service called unbound-resolvconf.service, enabled by default. It instructs resolvconf to write unbound's own DNS service at nameserver 127.0.0.1, but without the 5335 port, into the file /etc/resolv.conf. That /etc/resolv.conf file is used by local services/processes to determine the configured DNS servers. If you configured /etc/dhcpcd.conf with a static domain_name_servers= line, those DNS server(s) will be ignored/overruled by this service.
To check if this service is enabled for your distribution, run the command below and take note of the Active line. It will show either active or inactive, or it might not even be installed, resulting in a "could not be found" message:
sudo systemctl status unbound-resolvconf.service
To disable the service if you so desire, run these two commands:
sudo systemctl disable unbound-resolvconf.service
sudo systemctl stop unbound-resolvconf.service
To have the domain_name_servers= entry in the file /etc/dhcpcd.conf activated/propagated, run:
sudo systemctl restart dhcpcd
And check with the command below whether the IP(s) on the nameserver line(s) reflect the ones in the /etc/dhcpcd.conf file:
cat /etc/resolv.conf
Add logging to unbound
Warning: it's not recommended to increase verbosity for daily use, as unbound logs a lot, but it might be helpful for debugging purposes.
There are five levels of verbosity:
Level 0 means no verbosity, only errors
Level 1 gives operational information
Level 2 gives detailed operational information
Level 3 gives query level information
Level 4 gives algorithm level information
Level 5 logs client identification for cache misses
First, specify the log file and the verbosity level in the server part of /etc/unbound/unbound.conf.d/pi-hole.conf:
server:
    # If no logfile is specified, syslog is used
    logfile: "/var/log/unbound/unbound.log"
    verbosity: 1
Second, create the log directory and file, and set permissions:
sudo mkdir -p /var/log/unbound
sudo touch /var/log/unbound/unbound.log
sudo chown unbound /var/log/unbound/unbound.log
Third, restart unbound:
sudo service unbound restart
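As referenced above, a minimal sketch of the dnsmasq/FTL EDNS override, assuming you create the file /etc/dnsmasq.d/99-edns.conf exactly as the post suggests:
# /etc/dnsmasq.d/99-edns.conf
# keep pihole-FTL (dnsmasq) EDNS packet size in line with unbound's edns-buffer-size
edns-packet-max=1232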