Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

  • 1 Post
  • 8 Comments
Joined 3 years ago
Cake day: June 6th, 2023

  • Technically, yes. But colloquially, when we’re talking about “analytics” we mean embedded 3rd-party trackers that feed data to Google or another outside entity. Those are embedded much deeper in the application and track things far more invasively, such as how long you hover over certain links, how you move your cursor around the screen, your viewport size, browser fingerprinting, and more.

    The analytics I’m utilizing and referring to here are passive in that they’re collected anyway as part of the standard logging that happens when you access the webserver, which is also part of our basic security posture. They’re not as granular or invasive, but they can still give you useful information: what parts of your site people use the most, how many clicks it takes a visitor to get from the homepage to where they want to be (by following an IP’s sequence of URIs and seeing where it ends), how many visitors the site gets per day/week/month, and so on.
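As a rough sketch of what this kind of passive analytics looks like in practice, here’s a small script that counts page hits and unique client IPs from a standard common/combined-format access log. The regex and log path are illustrative assumptions, not anyone’s actual setup; adjust them to your webserver’s log format.

```python
# Rough sketch of log-based analytics: count page hits and unique client
# IPs from a standard combined/common-format access log. The regex and
# log path are illustrative; adjust them to your webserver's log format.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"\S+ (?P<uri>\S+) [^"]*" (?P<status>\d{3})'
)

def summarize(log_path):
    hits = Counter()   # requests per URI
    visitors = set()   # distinct client IPs (a rough "unique visitor" proxy)
    with open(log_path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m or m.group("status").startswith(("4", "5")):
                continue  # unparseable lines and error responses aren't page views
            hits[m.group("uri")] += 1
            visitors.add(m.group("ip"))
    return hits, visitors

# e.g. hits, visitors = summarize("/var/log/nginx/access.log")
```

Nothing leaves the server, and it works on logs you’re already keeping for security reasons.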


  • Logging is standard practice if you give even the slightest damn about security (read: you should), so I don’t see it as a problem. It’s what you use those logs for, how long they’re retained, and whether you sell them off.

    So as long as you’re only using them for security auditing and website analytics and don’t keep them forever and don’t plan to sell them to data brokers, there’s really nothing to fret over. A good place to disclose how you use the logs, how long you retain them, and what is logged is in the site’s privacy policy.


  • I do the occasional website for local businesses, and I never add any analytics code/trackers. One: they rarely ask. Two: the one time someone did ask for it, they never once logged in or looked at the trends. Three: I’d prefer not to unless they demand it.

    However, since I’m actually hosting the website for them, I can get decent heat maps from the access logs: the IP (which can be roughly geo-located), which URIs are accessed (URIs map to pages, and pages map to products/services), how often those are accessed, whether a visitor followed a link or came directly (by checking the Referer header), which pages are most popular (by counting URI occurrences in the logs), and whether they’re browsing from desktop or mobile (via the User-Agent header). That can also be combined with any data from their “Contact us” form.

    One reason they’ve probably never asked for it is because I provide a quarterly report for them using that passive data, and they seem happy with it.
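The referrer and device-type parts of that report can be pulled from the same logs. A sketch, assuming the nginx/Apache “combined” log format (where the last two quoted fields are the Referer and User-Agent headers); the mobile-detection heuristic here is deliberately crude:

```python
# Sketch of referrer and device-type breakdowns from a combined-format
# access log: the last two quoted fields are the Referer and User-Agent
# headers. Field layout assumes nginx/Apache "combined" defaults.
import re
from collections import Counter

COMBINED = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Deliberately coarse heuristic; real user-agent parsing needs a library.
MOBILE_HINTS = ("Mobile", "Android", "iPhone", "iPad")

def breakdown(lines):
    referrers, devices = Counter(), Counter()
    for line in lines:
        m = COMBINED.match(line)
        if not m:
            continue
        ref = m.group("referrer")
        referrers["(direct)" if ref in ("", "-") else ref] += 1
        agent = m.group("agent")
        is_mobile = any(hint in agent for hint in MOBILE_HINTS)
        devices["mobile" if is_mobile else "desktop"] += 1
    return referrers, devices
```

Feed it the lines of an access log and you get two tallies: where visitors came from, and what class of device they used.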




  • Is there a community about Matrix on Lemmy?

    !matrix@programming.dev

    Is Matrix technically part of the fediverse?

    I would say no. It doesn’t use ActivityPub and is its own thing. It’s federated in that independent Matrix servers can talk to each other (like email or Nextcloud). So while email would be considered a federated service, it’s not considered part of the fediverse. At most, it’s like a 2nd cousin.

    Who is the developer/team and do they have an active presence on the fediverse?

    The Matrix.org Foundation (https://matrix.org/), and I’m not sure. Maybe some of the individual contributors do, but I don’t know any off the top of my head.




  • Yes, I still run my own email server. It is not for the faint of heart, but once it’s configured and your IP reputation is clean, it’s mostly smooth sailing. I have not had any deliverability problems to date, initial setup/learning period notwithstanding.

    If you’re not scared away yet, here are some specific challenges you’ll face:

    • SMTP ports are typically blocked by many providers as a spam prevention measure. Hosting on a residential connection is often a complete non-starter and is becoming more difficult on business class connections as well (at least in the US, anyway).
    • If you plan to host in a VPS, good luck getting a clean IPv4 address. Most are on one or more public blacklists and likely several company-specific ones (cough Microsoft cough). I spent about 2 weeks getting my new VPS’s IP reputation cleaned up before I migrated from the old VPS.
    • Uptime: You need to have a reliable hosting solution with minimal power/server/network downtime.
    • Learning Curve: Email is not just one technology; it’s several that work together. So in a very basic email server, you will have Postfix as your MTA, Dovecot as your MDA, some kind of spam detection and filtering (e.g. SpamAssassin), some kind of antivirus to scan messages/attachments (e.g. Clamd), message signing (DKIM), user administration/management, webmail, etc. You’ll need to get all of these configured and operating in harmony.
    • Spam prevention standards: You’ll need to know how to work with DNS and create/manage all of the appropriate records on your domain (MX, SPF, DMARC, DKIM records, etc). All of these are pretty much required in 2023 in order for messages from your server to reach your recipient.
    • Keeping your IP reputation clean: This is an ongoing challenge if you host for a lot of people. It can only take one or two compromised accounts to send a LOT of spam and land your IP/IP block on a blacklist.
    • Keeping up with new standards: When I set my mail server up, DMARC and DKIM weren’t required by most recipient servers. Around 2016, I had to bolt on OpenDKIM to my email stack, otherwise my messages ended up in the recipient’s spam folder.
    • Contingency Plan: One day you may just wake up and decide it’s too much to keep managing your own email server. I’m not there yet, but I’ve already got a plan in place to let a bigger player take over when the time comes.
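For the DNS side of that list, the records look roughly like this. This is only an illustrative sketch: the domain, the `mail` DKIM selector, and the truncated public key are placeholders (the real key pair comes from a tool like opendkim-genkey), and your SPF and DMARC policies may be stricter or looser than these.

```
; Illustrative zone records -- names, selector, and policy values are placeholders
example.com.                  IN MX   10 mail.example.com.
example.com.                  IN TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..."
_dmarc.example.com.           IN TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

Roughly: MX says where mail for the domain goes, SPF says which hosts may send as the domain, the DKIM record publishes the signing key, and DMARC tells recipients what to do when SPF/DKIM checks fail.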