• 1 Post
  • 53 Comments
Joined 3 years ago
Cake day: January 17th, 2022







  • I used Kodi with LibreELEC for years in a similar setup. It was nice… but in practice I didn’t really use the “cool” features (indexing, image previews, Web remote control, etc.), so instead I looked at how Kodi works and noticed DLNA. My favorite video player, VLC, supports DLNA. I then looked for a DLNA server on Linux, found a few, and stuck with the simplest one I found, namely minidlna. It’s quite basic, at least the way I use it, but for my usage it’s enough:

    • install VLC on clients, including Android video projector, phones, XR HMDs, etc
    • install minidlna on server (RPi5)
    • configure minidlna to serve the right directory with subdirectories (/var/lib/minidlna by default)
    • configure the few extra programs that fetch videos so they push them (via an scp script and SSH key) to rpi5:/var/lib/minidlna/

    voila… very reliable setup (I’ve been using it daily for more than a year).
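
    As a sketch of the push step above (the hostname and destination directory match the post, but the key path, script shape, and minidlna.conf values are assumptions):

```shell
#!/bin/sh
# Hypothetical push helper for the setup above: copy a finished video
# into minidlna's media directory over SSH (key-based auth, as in the
# post). The key path is an assumption. With DRY_RUN=1 it only prints
# the command, so the sketch can be checked without a server.
set -eu
push_video() {
    video="$1"
    cmd="scp -i $HOME/.ssh/dlna_push_key $video rpi5:/var/lib/minidlna/"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# Matching /etc/minidlna.conf sketch on the server (assumed values):
#   media_dir=V,/var/lib/minidlna   # V = treat contents as video
#   inotify=yes                     # re-index automatically on new files
```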



  • What this shows is a total lack of originality.

    AI is not new. Open-source is not new. Putting two well-known concepts together wasn’t new either because… AI has historically been open. A lot of the cutting-edge research is done in public laboratories, with public funding, and is published in journals (sadly often behind paywalls, but still).

    So the name and the concept are both unoriginal.

    A lot of the popularity OpenAI gained by using a chatbot is not new either. Relying on ever-larger datasets and benefiting from Moore’s law is not new either.

    So I’m not standing on any side, neither this person nor the corporation.

    I find that claiming to “own” common ideas is destructive for most.







  • I’d happily give technical advice but first I need to understand the actual need.

    I don’t mean “what would be cool” but rather the absolute bare minimum that would make a solution acceptable.

    Why do I insist so much? Because installing a distribution, e.g. Debian, takes less than an hour. Assuming you have a separate /home partition, there is no need to “copy” anything, only to mount it correctly. If it is on another physical computer, then the speed will depend on your storage capacity and hardware (e.g. SSD vs HDD). Finally, “configuring” each piece of software will take a certain amount of time, especially if you didn’t save the configuration (which you should have).

    Anyway, my point being that:

    • installing the OS takes little time
    • copying data across physical devices takes a lot more time
    • configuring manually specific software takes a bit of time
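
    The “mount, don’t copy” point above can be sketched as a single fstab entry; the UUID and filesystem type below are placeholders, not values from the post:

```
# /etc/fstab — reattach the existing /home partition instead of copying it
# (find the real UUID with `blkid`; ext4 is an assumption)
UUID=0000-PLACEHOLDER  /home  ext4  defaults  0  2
```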

    So, if you repeat the operation several times a week, investing time to find a solution can be useful. If you do this a few times a year or less, it’s probably NOT actually efficient.

    So, again, is this an intellectual endeavor, for the purpose of knowing what an “ideal” scenario would be, or is it a genuine need?


  • utopiah@lemmy.ml to Linux@lemmy.ml · Printing on Linux · 2 months ago

    HP Laser 107w, driverless, over LAN.

    I just Ctrl+P from any software and it prints.

    It also prints programmatically (e.g. for folk.computer) thanks to IPP.
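
    A minimal sketch of what that looks like with CUPS; the queue name and IPP URI below are placeholders, not the actual setup:

```shell
# Build the lp invocation used to print a file on a driverless queue.
# "HP_Laser_107w" is a hypothetical queue name; list real ones with
# `lpstat -p`.
ipp_print_cmd() {
    printf 'lp -d HP_Laser_107w %s\n' "$1"
}

# One-time driverless setup in CUPS would look something like this
# (assumed printer URI):
#   lpadmin -p HP_Laser_107w -E -v ipp://printer.lan/ipp/print -m everywhere
```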

    I haven’t had to “think about printing” since I set that up, so I don’t know where you get that sentiment.



  • As per usual, in order to understand what it means we need to see:

    • performance benchmark (A100 level? H100? B100? GB200 setups?)
    • energy consumption (A100 performance level and H100 lower watt? the other way around?)
    • networking scalability (how many cards can be interconnected for distributed compute? NVLink equivalents?)
    • software stack (e.g can it run CUDA and if not what alternatives can be used?)
    • yield (how many dies are usable, i.e. can it be commercially viable or is it still R&D?)
    • price (which regardless of possible subsidies would come from yield)
    • volume (how many cards can actually be bought, also dependent on yield)

    Still interesting to read after the announcements, as per usual, and especially to see who will actually manufacture them at scale (SMIC? TSMC?).


  • It’s a classic BigTech marketing trick. They are the only ones able to build “it”, and it doesn’t matter whether we like “it” or not, because “it” is coming.

    I believed this BS for longer than I care to admit. I thought, “Oh yes, that’s progress,” so of course it will come; it must come. It’s also very complex, so nobody but such large entities with so many resources can do it.

    Then… you start to encounter more and more vaporware. Grandiose announcements, and when you try the result you can’t help but be disappointed. You compare what was promised with what you got, think it’s cool, kind of, shrug, and move on with your day. It happens again and again. Sometimes you see something really impressive; you dig and realize it’s a partnership with a startup or a university doing the actual research. The more time passes, the more you realize that all of BigTech does it, across technologies. You also realize that your artist friend did something just as cool, and open-source. Their version doesn’t look polished, but it works. You find a Kickstarter for a product that is genuinely novel (say the Oculus DK1) and has no link (initially) with BigTech…

    You finally realize, year after year, that you have been brainwashed into believing only BigTech can do it. It’s false. It’s self-serving BS meant both to stop you from building and to make you depend on them.

    You can build, we can build and we can build better.

    Can we build AGI? Maybe. Can they build AGI? They sure want us to believe it, but they have lied through their teeth before, so until they actually deliver, they can NOT.

    TL;DR: BigTech is not as powerful as they claim to be and they benefit from the hype, in this AI hype cycle and otherwise. They can’t be trusted.