Linux is a branch of development of the old unix class of systems. Unix is not necessarily open and free; FOSS is what we classify as open and free software. Unix since its inception has been deeply tied to specific private industrial interests, and we should not forget that while we examine the use of linux by left-minded activists. FOSS is nice and cool, but nearly all of it runs on non-open, non-free hardware. Apolitical crowd-funding proposals and DIY construction attempts have led to ultra-expensive idealist solutions reserved for the very few and for eccentric, affluent experimenters.

Linux vs Windows is cool and trendy, isn't it? But does that rivalry by itself carry any political content? If it does, what is it? Let's examine it from the base.

FOSS: people, as small teams or individuals, "producing as much as they can and want" and offering what they produce to be shared, used, and modified by anyone, "as much as they need". This is as close to a communist system of production and consumption as we have experienced in the entirety of modern history: no exchange whatsoever, collective production according to ability and collective consumption according to need.

BUT we also have corporations, some of them mega-corps, multinationals that nearly monopolize sectors of the computing market, creating R&D departments specifically to produce and offer open and free code (or conditionally free code). Why? Firstly, because other idiots will join their projects and contribute further development (labor) for "free", while the corporations retain the leadership and ownership of the project. Somehow it is cool to use their code, without asking why they were willing to offer it in the first place, as long as we can say we are anti-MS and ms-win free.

Like false class consciousness, we have fan-boys of IBM, Google, Facebook, Oracle, Qt, HP, Intel, AMD, … products lined up against MS.

Back when unix would only run on ultra-expensive, large-scale enterprise systems and costly workstations (remember DEC, Sun, SGI, … workstations, each priced like two brand-new fast sports cars) and the PC market was restricted to MS or the alternative Apple crap, people tried and tried to port forms of unix to the PC. Some really gifted hacking experts achieved such marvels, but the results were so hardware-specific that they could not be generalized and used on a mass scale.

Then suddenly a genius Finn and his friends devised a kernel that could make most available PC hardware work, and unix with a linux kernel could boot and run.

IBM eventually saw a way back into the PC market it had lost by handing DOS out to a subcontractor (MS), and saw an opportunity to take over and steer this "project" by promoting RedHat. After two decades of behind-the-scenes guidance, once the projected outcome had succeeded in cornering the market, IBM showed up and bought RH.

Are we all still anti-MS and pro-IBM, google, Oracle, FB, Intel/AMD?

The bait thrown to dumb fish was an automated desktop that looked and behaved just like the latest MS-win edition.

What is the resistance?

Linus Torvalds and the few others who sign off on the kernel today make six-figure salaries, ALL paid by a handful of computing giants which, by offering millions to the foundation, control what it does. Traps like rust, telemetry, … and other "options" are shoved into the kernel daily to satisfy the paying clients' demands and wishes.

And we, on the left, are fans of a multimillionaire's "team" against a trillionaire's "team". This is not football, or cricket, or F1. This is your data in the hands of multinationals and their fellow customers/agencies. Don't forget which welfare system maintains the hierarchy of those industries, whether the market is rosy or gray. Do I need to spell out the connection?

Beware of multinationals bearing gifts.

Yes, there are healthier alternatives that require a little more work and study to employ; the quick and easy has a "cost" even when it is FOSS.


  • m532@lemmygrad.ml
    1 year ago
    1. Unix is not linux
    2. All of this does not have anything to do with windows. What is this both-sides-bad liberalism? Windows is clearly so much worse at all of this it isn’t even worth talking about in this context.
    3. It's not the fault of FOSS devs that many live in bourgeois dictatorships. Of course the bourgeoisie will steal the code. Of course they will subvert the license.
    4. Are you expecting FOSS devs to somehow conjure their own chip fabs? That’s not how material conditions work.
  • Prologue7642@lemmygrad.ml
    1 year ago

    I don't really get the point of this post. If you want to say that quite a lot of FOSS code is funded by huge corporations, then yeah, sure; most people, I would assume, know that. But I'm not really sure what that has to do with the title. Even if Linux is mostly run by corporations, it is still much better than the alternatives.

    Also, I'm not really sure what you mean by traps like Rust and telemetry. There is no telemetry on Linux, and the only reason I can think of for you including it is the recent Go telemetry, which I don't see as relevant here. With Rust I also don't get it: Rust wasn't added because some company wanted it or whatever, it was added because it is a popular (and extremely loved) language that is suitable for kernel development. Not many people nowadays want to code in C.

    • iriyan@lemmygrad.mlOP
      1 year ago

      linux and unix were built on alternatives. If you don't like a piece of code offered as a tool to do something, you write something better and offer it/share it with others. So you, as a user, have a choice among similar tools. Even the most basic ones, like the GNU utilities, have busybox and other specific alternatives.

      The latest trend is to have NO ALTERNATIVES, to get everyone to use one core system. So instead of diverging as a system (as some of the BSD-unix projects did), linux is showing a tendency to converge into one system (fedora, debian, arch) with little difference among them.

      You get corporate media publishing articles on the "top-ten" linux distributions, or the "top-ten" desktops, all based on the very same edition of IBM software, no exception, as there is none. This is marketing, steering the public in a single direction. The question you should answer for yourself is why, without someone spelling it out for you and drawing the attention of three-lettered agencies.

      • debased@lemmygrad.ml
        1 year ago

        okay, i can totally see why you wouldn't like linux as a whole becoming "one thing", but what is your opinion on the growth of linux on the desktop? By far the biggest factor, in my opinion, pushing people away (consumers as well as devs) is having to deal with so many different distros, packaging apps with different libraries on so many different systems. Having standards that aim to reduce that load can only help the masses adopt an objectively better, even if not perfect, operating system, wouldn't it? i.e. the rise of appimages and flatpaks as a means to curb that issue is, to me, a good thing, even if not "the most optimal way of doing things".

        • Prologue7642@lemmygrad.ml
          1 year ago

          I always wonder whether that is an actual issue. Apart from some duplicated effort with things like packaging for different distros (which is something distro maintainers do anyway), I don't really get this point. For me, it only makes sense for proprietary packages and not for open source.

          Apart from some small differences in how you install packages, using most distros is basically the same.

          I am always confused by this point because I see it repeated everywhere, but never with a good argument supporting it.

          • FuckBigTech347@lemmygrad.ml
            1 year ago

            I only ever see people who work on proprietary software make this argument. For FOSS this is a non-issue. If you have the source code available, you can just compile it against the libs on your system and it will just work in most cases, unless there was a major change in some lib's API. And even then you can make some adjustments yourself to make it work. Distro maintainers tend to do this.

          • debased@lemmygrad.ml
            1 year ago

            For many, admittedly smaller, apps it's always a bit of a pain to have to install them manually because the dev simply gave up trying to package for "the big 3" and distro maintainers can't care about every small program, although the current system works well enough for most programs.

            However i am not a developer, so i can’t speak firsthand about the difficulty of packaging and maintaining your app on different distros across years, and i’m not sure if the brunt of maintaining all these apps should fall onto distro maintainers.

            About users and using distros, i can agree that it’s roughly the same either way with the only real difference most of the time being “do you use apt or pacman to install packages”

            • Prologue7642@lemmygrad.ml
              1 year ago

              Fair enough, but I only see that for some niche projects. And at that point you are probably not a regular user and can do it yourself.

              There is an issue on the other side: if you only provide an appimage/flatpak, it is much less customizable. You can't optimize the software for your CPU, and you can't mix and match which versions of the libraries the software uses. Personally, I think it is always a good idea to provide a flatpak alternative for those who want it, but I don't see it as a replacement for regular packaging.

              Edit: I would much rather see something like nix being used to describe the dependencies. That is in my opinion the best solution, which also allows you to more easily port it to other systems.

              • debased@lemmygrad.ml
                1 year ago

                Ideally, it'd be good enough to simply have, say, an appimage/flatpak plus the source code, and then let distro maintainers/end users build it how they want/need to. i have had the pleasure of trying to get NVENC working in OBS under Debian 10, and that was a massive pain: due to outdated nvidia-drivers i had to recompile ffmpeg with the right flags, and that would break after every update. The easiest way was to get an OBS flatpak that came prebuilt with it all, IIRC. I guess my problems were mainly because i used debian stable at the time; it's probably not as much of a pain now that i'm on sid.

                I don't know anything about Nix. i've heard a lot of good about it and how it's "all config files" or something, but the prospect of learning a whole new world scares me. i trust your judgment on that, though. I'll stick to what i know on my boring-ass debian sid :D

                • Prologue7642@lemmygrad.ml
                  1 year ago

                  I would imagine that if you weren't on Debian stable, it would be much better. From what I've seen, dealing with anything Nvidia on stable distros is a pain.

                  I just recently started working with it and it is really nice. You have NixOS, where you can define basically everything with just nix config files. You want to run MPD on some port? Sure, just add this option, and it will create the config file and put it in the right place. It is really easy to define your entire system with all the options in one place. I don't think I've ever had to change anything in /etc; I just change an option in my system config. I think something like this is probably the future of Linux.

                  Nix by itself is just a language that is used to configure things. You can, for example, define all the dependencies for your project with it, so it is easy for anyone with nix (which you can install basically anywhere) to build. By doing it like this you can be sure all the dependencies are defined, so it is really easy to port the software to other distros even if you weren't using Nix.

      • Prologue7642@lemmygrad.ml
        1 year ago

        That just depends on what you use. There are loads of distros that allow you to use whatever you want. There are only so many ways you can do stuff, and it doesn’t make much sense to differentiate if you don’t have reason to. You have some genuinely diverging distros like NixOS that are significantly different.

        Not really sure what corporate media you read. In my experience, most of those lists are just a popularity contest, and they usually include non-corporate distros like arch, Debian, etc. As for desktops, I am not even sure there are ten desktop environments (at least with a reasonable number of users).

      • Prologue7642@lemmygrad.ml
        1 year ago

        The fact that an option has the word telemetry in it doesn't mean it spies on you:

        - CONFIG_WILCO_EC_TELEMETRY -> lets you read telemetry from some ChromeOS-device hardware
        - CONFIG_INTEL_PMT_TELEMETRY -> lets you access the telemetry that Intel Platform Monitoring Technology provides
        - CONFIG_INTEL_TELEMETRY -> lets you configure telemetry and query some events from Intel hardware

        None of these options spy on you or do anything nefarious. It just means that you can have an application that queries some data from them, nothing more.
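
        For illustration, here is a rough sketch of what "querying" that data looks like in practice: local file reads under sysfs, nothing sent anywhere. The paths and attribute names are assumptions based on the upstream intel_pmt class driver, so treat it as a sketch rather than a reference (and it may need root):

        ```python
        #!/usr/bin/env python3
        """Sketch: peek at what the intel_pmt "telemetry" driver exposes locally.

        Assumptions: the /sys/class/intel_pmt directory and the 'guid'/'size'/'telem'
        attribute names, per the upstream driver. If the driver isn't present or is
        disabled, the directory simply doesn't exist. Everything here is a local read.
        """
        from pathlib import Path

        PMT_CLASS = Path("/sys/class/intel_pmt")  # assumed sysfs class directory

        def dump_pmt() -> None:
            if not PMT_CLASS.is_dir():
                print("intel_pmt not present/enabled on this kernel")
                return
            for region in sorted(PMT_CLASS.glob("telem*")):
                print(f"== {region.name} ==")
                for attr in ("guid", "size"):  # small text attributes (assumed names)
                    f = region / attr
                    if f.is_file():
                        print(f"  {attr}: {f.read_text().strip()}")
                blob = region / "telem"  # raw counter blob, if exposed
                if blob.is_file():
                    with blob.open("rb") as fh:
                        print(f"  first bytes: {fh.read(16).hex()}")

        if __name__ == "__main__":
            dump_pmt()
        ```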

        Again, not sure what your issue with Rust is.

        And with IFS it is the same as above; someone here already linked you an article on it.

        • Soviet Pigeon@lemmygrad.ml
          1 year ago

          And that is exactly the point. Working with many servers means that you have to collect data. How am I supposed to know when it’s time to replace something and so on? I remember my boss not wanting to spend money on Nagios (servers etc.) until one day everything blew up. No one could work for two days. After that the idiot finally spent money on a monitoring system and you could finally see when the RAID failed.

          • Prologue7642@lemmygrad.ml
            1 year ago

            Exactly. If anything, I want more telemetry in my system, more easily accessible. I can't imagine living without SMART, for example.

  • silent_clash@lemmygrad.ml
    1 year ago

    The existence of enterprise contributors to Linux is symbiotic with volunteer devs and helps drive development. There are benefits to having full time talented devs and engineers paid for their time working on Linux, and for the most part, the whole community is better for it.

    • Soviet Pigeon@lemmygrad.ml
      1 year ago

      There is no telemetry in the sense that something is sent to Intel. Look here. And this is quite handy if you use Linux in a datacenter.

      • iriyan@lemmygrad.mlOP
        1 year ago

        not on 5.10, but most 6.xx kernels do have telemetry, and very few distros disable it, where that is even possible.

        • Soviet Pigeon@lemmygrad.ml
          1 year ago

          Show me the code where something is being sent to Intel, not just that a module is loaded. Telemetry also has the meaning that you collect data about your own assets in a datacenter. I couldn't find anything like that in the code of Intel-PMT.

    • iriyan@lemmygrad.mlOP
      1 year ago

      rust, and maybe go, in a way evade what open and free code really meant (which includes the characteristic of being self-contained). Much rust-written software demands up-to-the-minute releases of dependencies, automatically fetched and used while you compile the piece of software. First, there is no way you can audit this; second, at any given moment this fetched code can change, affecting what you compiled and making it exponentially more difficult to audit and certify as secure. It also transfers responsibility for what the code contains to second and third parties, making it legally impossible for anyone to be held responsible, or accused, of creating back-doors and other weaknesses in software.

      But it is modern and it is being pushed everywhere. In general, when you hear buzzwords, terms, and technologies making noise and being used everywhere, beware of the trojan.

      Facebook, which had contributed nothing to the FOSS community, suddenly released zstd, which they bought from someone (or so they say) and made him rich. Within months this FOSS was incorporated and used all across the linux community on the strength of very dubious data supporting its superiority, like publishing comparative compression/decompression numbers for multi-threaded software against a competitor restricted to a single thread. In the end nobody really even uses the optimized conditions under which zstd has a tiny speed advantage while still losing on compression ratio.

      Someone and something drives this "rush", like the gold in the Columbia river advertised to gold diggers by tool merchants.

      At least on the left we should have a bit more of a critical tendency than anti-windows fan-boy clubs. The price you pay to have a usb stick automounted read-write as a user, automatically upon insertion, is one of security and privacy. All this overhead instead of five lines of script.
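
      (For comparison, a minimal on-demand alternative, sketched here in Python purely as an illustration of the "few lines of script" idea; the device node and mountpoint are hypothetical, and plain mount needs root or a matching "user" entry in fstab:)

      ```python
      #!/usr/bin/env python3
      """Sketch: mount a USB stick only when explicitly asked, instead of relying on
      a desktop automounter. Defaults below are hypothetical examples."""
      import subprocess
      import sys

      def mount(dev: str, mountpoint: str) -> None:
          # Nothing happens on insertion; the user decides when (and if) to mount.
          subprocess.run(["mount", dev, mountpoint], check=True)

      if __name__ == "__main__":
          device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb1"  # hypothetical
          target = sys.argv[2] if len(sys.argv) > 2 else "/mnt/usb"   # hypothetical
          mount(device, target)
      ```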

      • Prologue7642@lemmygrad.ml
        1 year ago

        Most of the code written nowadays isn't self-contained, and it is basically impossible for it to be. I mean, I guess you have some exceptions like the Linux kernel itself and some low-level utilities, but you use libraries and other people's code everywhere. In that respect, Rust is much better than most other options because it at least lets you pin your dependencies really easily. The idea that everyone who uses some code is auditing it is just ridiculous. You should be able to, sure, and in some cases it might be a good idea to do so, at least for parts of your code. But if you are using Linux, did you audit the entire Linux source code? What about the C standard libraries? Even just that would take a ridiculous amount of time.

        I would also argue that rust isn't pushed everywhere; people just like it because it is a wonderful language. There are many more people who use it in their own projects than use it professionally, for example.

        I could understand your argument if it were based on how Rust is run, what licenses it uses, etc. But this is rather baffling to me; basically the only thing you mention is the issue of static vs. dynamic linking.

        With zstd, again, I'm not really sure what you are even trying to say. That Facebook had an impact on what is used? Ok, so? Zstd is completely open source, and if someone decides to use it, that is up to them. I am pretty sure that every piece of software I use that supports zstd also lets me use other compression algorithms. And from what I found, zstd in some cases is superior to the alternatives, but feel free to provide sources; I am sure I could be incorrect.

        • iriyan@lemmygrad.mlOP
          1 year ago

          yes, you always have some dependencies; even for the lowest-level linux utilities there is usually a C library (glibc or musl), but you choose and provide the dependencies you need, and they are specific. Here we have a dynamic process that (not always, but sometimes) pulls the latest commit from someone's git as a dependency. You build, then a minute later I try to build the same thing, someone has pushed a commit replacing the previous change, and my package builds as well. The two results are not identical, one may contain a backdoor, and we didn't even notice a difference.

          When you build from glibc 2.3.4 and I build from the same, it IS the same.

          • Prologue7642@lemmygrad.ml
            1 year ago

            Who says the copy of glibc 2.3.4 you and I have is the same? That depends entirely on where you got it from, and even then we can build it with different flags, etc. I'm not really sure how rust is worse here. On the contrary, when you build software in C/C++ you usually link dynamically, so you have no idea what version of a library someone is using or where they got it. In that sense, Rust's approach is actually safer.

            • iriyan@lemmygrad.mlOP
              1 year ago

              A classic trashy facebook kind of reaction: dismiss 99 points of content so you can show off being right on a tiny detail of this one point, nearly irrelevant to the post.

              There is an official source for glibc; a specific edition is a specific edition, a specific commit is what it is, and you package it and use it stating the exact source and version/commit. This DOES NOT change. You don't necessarily have such control in rust; someone else does. If you don't have reproducibility in FOSS it is not really FOSS, it is quicksand and blurry picture painting. If an application later seems to have a bug, you can't really tell whether it is its own bug or one of its dependencies', and you will not find two systems in the universe that are identical, because of the crappy rust builds.

              Let's get back on topic. Rust is interesting because there is an enormous number of people working on it, it has enormous support from corporations, it is propped up by media hype (corporate media hype) … and, just like in the recent past, it hides behind a "non-profit" foundation that some large corporation has in its pocket to make it look like FOSS.

              Please try to think without reproducing corporate media hype served to the eternally dizzy.

              • Prologue7642@lemmygrad.ml
                1 year ago

                Yes, there is an official source for glibc, but there are also official sources for every single rust package. And yes, when you build a rust application you pin your dependencies to specific versions; you create lock files that even check the sha of the source. It seems to me you have never actually used Rust, to be honest. If you take your Rust application and build it on two machines using the same lock file, you will get the same result (at least in regard to dependencies).
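
                To make that concrete, here is a small sketch that just prints what a Cargo.lock actually pins -- exact versions, sources, and sha256 checksums. Cargo.lock is plain TOML, so Python's standard library (3.11+) can read it; the path is an example:

                ```python
                #!/usr/bin/env python3
                """Sketch: show what a Cargo.lock pins. Cargo.lock is TOML, so the
                stdlib tomllib (Python 3.11+) can parse it. Path is an example."""
                import tomllib
                from pathlib import Path

                def show_pins(lock_path: str = "Cargo.lock") -> None:
                    data = tomllib.loads(Path(lock_path).read_text())
                    for pkg in data.get("package", []):
                        name = pkg["name"]
                        version = pkg["version"]
                        source = pkg.get("source", "local/workspace")
                        checksum = pkg.get("checksum", "-")  # sha256 of the published .crate
                        print(f"{name} {version}\n  source:   {source}\n  checksum: {checksum}")

                if __name__ == "__main__":
                    show_pins()
                ```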

                And even if it didn't, what is your point? C libraries have literally zero mechanisms to control reproducibility; anything Rust does is better in this respect. Currently, there is almost no way to guarantee reproducibility at the language level. I really wonder why you only take issue with Rust in this regard and not with every C/C++ project.

                There is an argument for reproducibility, but in that case you should hold every project/language to the same standard.

                "Let's get back on topic. Rust is interesting because there is an enormous number of people working on it, it has enormous support from corporations, it is propped up by media hype (corporate media hype) … and, just like in the recent past, it hides behind a "non-profit" foundation that some large corporation has in its pocket to make it look like FOSS."

                Rust is mainly interesting because it is a really nice language to work in. For example, it has been the most loved language in Stack Overflow surveys for something like seven years in a row. There is a reason for that: it solves many issues of other languages (such as C/C++) while also being much easier to use and providing good performance. You do realize that lemmy, for example, is written in Rust? I would certainly not consider its developers to be part of the corporate media.

                I would really like to see your sources for the claim that the only reason Rust is used is corporate media hype. That is something you will have to prove for this argument to hold any weight.

                Your point about corporate support is also not really relevant. Unfortunately, in our current capitalist system, that is true for basically any programming language/project; there aren't that many projects with no corporate sponsors. You can have issues with how Rust is run, but that doesn't mean it is not FOSS. Give some concrete reasons why it is not FOSS, other than that the Rust Foundation (which just holds things like trademarks) has corporate sponsors.

                If you don’t like Rust, that is fine, but you should provide some evidence for your claims.

                • iriyan@lemmygrad.mlOP
                  1 year ago

                  I don't have to spend an ounce of energy proving anything to you, when within an m-l community you see nothing of interest in the difference between two or three friends writing and providing FOSS and a multinational corporation that thrives on selling systems and software providing FOSS. libudev-zero is FOSS, seatd is FOSS; systemd is just a maze of IBM's control over FOSS.

                  Why isn't s6 receiving any corporate support? It is by far the most capable init system we know of: portable everywhere and 101% reliable, with no "will not fix" bugs lurking for years. As far as we know, the only offer s6 ever received was a personal one for its developer to become a highly paid google executive/employee (probably with a huge galley of programmers under his command), an offer to which he wrote a public NO.

                  Again, I am really surprised to have to explain such things in such a community, which suggests that theory and ideology are trapped in a little compartment labeled the "political philosophy" hobby, while in all other matters we are just day-to-day liberals with tax write-offs.

                  Poor Pol Pot must be rolling in his grave, figuratively speaking, since decomposing organic matter doesn't move much. His revolver may still be useful, though, to a blue-collar worker or industrial farmer.

    • iriyan@lemmygrad.mlOP
      1 year ago

      wilco, intel, and possibly hidden amd options. There is also this Intel IFS, which is pushed as "good telemetry", or telemetry you want as a super-enterprise admin so you know when to replace equipment.

      https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config
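
      (A rough way to check this for yourself -- a sketch in Python that lists the enabled *TELEMETRY* options either in a saved config like the one linked above or in the running kernel's /proc/config.gz, where the kernel exposes it:)

      ```python
      #!/usr/bin/env python3
      """Sketch: list which *TELEMETRY* options a kernel config enables.
      Works on a downloaded config file (like the Arch one linked above) or on
      /proc/config.gz if your kernel exposes it (CONFIG_IKCONFIG_PROC)."""
      import gzip
      import sys
      from pathlib import Path

      def read_config(path: str) -> str:
          p = Path(path)
          raw = p.read_bytes()
          return gzip.decompress(raw).decode() if p.suffix == ".gz" else raw.decode()

      def telemetry_options(text: str):
          for line in text.splitlines():
              line = line.strip()
              # Skip "# CONFIG_... is not set" lines; keep enabled (=y / =m) options.
              if "TELEMETRY" in line and not line.startswith("#"):
                  yield line

      if __name__ == "__main__":
          path = sys.argv[1] if len(sys.argv) > 1 else "/proc/config.gz"
          for opt in telemetry_options(read_config(path)):
              print(opt)
      ```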

      Many of those things didn't exist in pre-6 editions; they have crept in due to pressure from manufacturers. The current 6.xx kernels are more than double the size of 5.10-lts and nearly double that of 5.15-lts … Much of the firmware included is not even for production hardware but for alpha/beta versions of hardware still under testing by manufacturers.

      What do users commonly do? Seek out the latest and newest release, without ever reading the release notes and changelogs. "Continuous development and modern equipment and code are always better."

      Critical ability is now a characteristic of "toxic personalities", another capitalist buzzword adopted "uncritically" by the masses.

      • Soviet Pigeon@lemmygrad.ml
        1 year ago

        I don't really understand your point. What is so bad about those telemetry drivers? They have to be loaded, and there is no use for them for ordinary users.

        • iriyan@lemmygrad.mlOP
          1 year ago

          when telemetry is enabled, it is not the user who makes use of it but a manufacturer drawing data from the user's machine.

          • debased@lemmygrad.ml
            1 year ago

            Genuine question then: do the distros using these kernels disable this telemetry when you install them, provided you tick the "no telemetry pls" option during the installation process?

        • iriyan@lemmygrad.mlOP
          1 year ago

          What is so bad about allowing a large corporation to draw data out of your system at will? For one, it is very much against the fundamentals and principles unix and FOSS were based on. In the earlier days, a selling point of FOSS was assuring the user there was no "telemetry". At the other extreme, the public, through android and mac-os, has been conditioned to allow telemetry with pretty much every application they ever install.

          • debased@lemmygrad.ml
            1 year ago

            Couldn't you just as well recompile your own kernel without these telemetry issues, with Gentoo for example? The fact that you can do this, and can't at all with Windows, is a pretty big factor to me.

            • iriyan@lemmygrad.mlOP
              1 year ago

              For sure FOSS is a night-and-day improvement over closed, non-free binary blobs; for all we know, Win 11 may be linux in drag made to look like windows. But the anti-MS-win identity is too short-sighted, for people on the left at least. How often do you see similar behavior among linuxers, for/against intel/amd, when in the recent and more distant past both have been caught red-handed forcing backdoored systems onto the market, discovered long after and silenced by the corporate press? One of them, a few years ago, was briefly adopted into the linux kernel before it was flushed. Speck is one I readily remember: it was NSA code that google suggested be added to the kernel.

              • Prologue7642@lemmygrad.ml
                1 year ago

                I would say that especially leftist Linux users are against every corporation. But there are reasons why Linux users prefer AMD to Nvidia. They are more FOSS friendly. If you want to argue for free hardware, I 100% agree, but unfortunately that is basically impossible nowadays.

                If you want to be alarmed about what your computer is doing, I would worry much more about things like the Intel Management Engine and AMD's version of it, or the binary blobs that you load just to use these devices. Those are the actual issues we should be focused on.

          • Soviet Pigeon@lemmygrad.ml
            1 year ago

            It is being used in datacenters. You load the module, you can aggregate your data and then visualise it.

            This as an example. I couldn't find anything showing how something would be sent to Intel.

            This is the kind of telemetry which is useful: "The collected monitoring data is exposed to user-space via a new XML format for interested tools to parse."

            And this for the wilco stuff. Also look here. I could not find anything showing that it gets sent to a corporation.

            What is wrong with me having a bunch of servers and wanting to collect telemetry data from them? I can collect it wherever I want.

            • iriyan@lemmygrad.mlOP
              1 year ago

              if you can collect it, others may be able to as well, and if there is a way to collect one thing, that is a vehicle to access other things. As Snowden says, before his activism this was conspiracy theory, but the world "did change" since then. Or did it?

              • Prologue7642@lemmygrad.ml
                1 year ago

                I mean, sure, but that is true for literally every piece of info on your computer. If you can read data from these modules, you can read data from anything else: you can read the entire memory, query the file system, do basically anything you want. At that point, whether someone can query the capabilities of your intel CPU is not something I would worry about.

                • iriyan@lemmygrad.mlOP
                  1 year ago

                  I can read from my machine, but I can only read from your machine on the same LAN if you have telemetry on. And if I can read yours, and this LAN is connected to a wider net, then "it is possible" that someone across the universe can as well.

              • Soviet Pigeon@lemmygrad.ml
                1 year ago

                No they can't. What are you talking about? It's like an agent writing specific things to /var/log. Telemetry means that data is collected for your usage, not for corporations. Intel PMT gives you the ability to access data; you can collect it and do things with it.

                Have you ever seen software which helps you see how your assets are doing? Look at Nagios. It collects data from your assets and visualises it, and that is a kind of telemetry. With Nagios you can see when it's time to replace the hard disks in a server because the SMART values are bad. This is all telemetry. And here you have the possibility, in a driver, to access certain data: no stupid workarounds, direct access, access which is under your control. Have you ever seen a datacenter, or worked somewhere where you have to manage a bunch of servers? You can check every instance one by one, or simply collect data and see what's going on.
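
                (For what it's worth, a toy sketch of that idea -- "telemetry" as data you collect from your own machine and write wherever you choose; the metrics come from standard procfs files, and the output path is just an example:)

                ```python
                #!/usr/bin/env python3
                """Toy sketch of self-hosted "telemetry": read a few standard procfs
                metrics from the local machine and append them to a log you control.
                Nothing here talks to any network or vendor."""
                import json
                import time
                from pathlib import Path

                def sample() -> dict:
                    # 1-minute load average is the first field of /proc/loadavg.
                    load1 = float(Path("/proc/loadavg").read_text().split()[0])
                    mem = {}
                    for line in Path("/proc/meminfo").read_text().splitlines():
                        key, _, rest = line.partition(":")
                        if key in ("MemTotal", "MemAvailable"):
                            mem[key] = int(rest.split()[0])  # value in kB
                    return {"ts": time.time(), "load1": load1, **mem}

                if __name__ == "__main__":
                    # Append one JSON line per sample to a file of your choosing.
                    with open("/tmp/my-telemetry.jsonl", "a") as out:
                        out.write(json.dumps(sample()) + "\n")
                ```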

                You are simply misunderstanding the word “telemetry”.

                • iriyan@lemmygrad.mlOP
                  1 year ago

                  First of all, for people on the left, as this community states, running datacenters and enterprise networks is a minor and rare use case (unless you are the admin of the party's or a union federation's headquarters). Telemetry means the machine has the ability to serve data to others: to the network, and to the admin collecting it. It is a way for machines to passively allow someone else to collect data. Any chance this can be exploited? Why have it if you are the sole user/admin of a single machine?

                  With the complexity of a self-regulating system like systemd, such abilities can't be controlled or audited by a user, yet look at what most linux users run. The collaboration of all those subsystems doing such things expands the surface of a machine's presence on any network that can be exploited.

                  For non-industrial use, no telemetry is needed and none should be allowed. But you pick out one detail of what the original post is trying to say in order to discredit it on a meaningless technicality. There are hundreds of parts of a linux system that could be picked apart in the same way.

                  The point is: DO NOT let your anti-windows rhetoric blind and confuse users into thinking this is an easy and safe alternative that provides security, privacy, and other goodies, when 99% of them choose a setup that is just as automated and "user friendly" as windows.

                  You tell me whether the average linux user (especially those using gnome and plasma) knows where, how, and why to disable kernel modules, or whether those modules are disabled, enabled, built into the kernel, or waiting for something to trigger them. Look at forums and boards: people mess up their boot-loader or fstab and their ms-win reaction is to format the disk and reinstall something like ubuntu.
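
                  (As an illustration of what that knowledge looks like in practice -- a rough sketch that lists currently loaded modules whose names look telemetry-related; the name patterns are assumptions, and actually preventing a module from loading would then go through a blacklist entry in /etc/modprobe.d/:)

                  ```python
                  #!/usr/bin/env python3
                  """Rough sketch: list loaded kernel modules whose names look telemetry-related.
                  /proc/modules is standard; the name patterns below are just guesses to grep for.
                  To keep a module from loading you would add 'blacklist <name>' to a file in
                  /etc/modprobe.d/ (not done here)."""
                  from pathlib import Path

                  PATTERNS = ("telemetry", "pmt", "wilco")  # assumed driver-name fragments

                  def loaded_modules():
                      # Each line of /proc/modules starts with the module name.
                      for line in Path("/proc/modules").read_text().splitlines():
                          yield line.split()[0]

                  if __name__ == "__main__":
                      hits = [m for m in loaded_modules() if any(p in m for p in PATTERNS)]
                      print("\n".join(hits) if hits else "no matching modules loaded")
                  ```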