This is the hour of the night's farewell, O children of darkness;

we too shall see the crimson banner on the shoulders of dawn,

you too will have to behold that spectacle; we too shall see it

– Sahir Ludhianvi

  • 1 Post
  • 11 Comments
Joined 25 days ago
Cake day: March 13th, 2025

  • nargis@lemmy.dbzer0.com to Memes@lemmy.ml · "Stalin the mysagonist" · +24 / −12 · edited 5 days ago

    Yeah because working outside and still doing all the domestic work is so much better than being confined to the house. Who needs feminism?

    No doubt the Soviet Union was a huge step forward for women, but this is just a dumb thing to say. Women have always been expected to do unpaid household labour and emotional labour.

  • nargis@lemmy.dbzer0.com to Ask Lemmy@lemmy.world · *Permanently Deleted* · +2 · edited 16 days ago

    Narcissus poeticus, or poet's daffodil, is called 'nargis' (nuh-ruh-giiis) in a number of languages. The root word is Persian. It is often used in poetry as a metaphor for a person's eyes (generally your girlfriend's), as the flower is supposed to be 'eye-shaped'. There is, of course, the Greek myth, which is often alluded to in English literature. It's also a pretty flower.




  • eliminates mention of “AI safety”

    AI datasets tend to have a white bias. White people are over-represented in photographs, for instance. If one trains AI on such datasets for something like facial recognition (with mostly white faces), it will be less likely to identify non-white people as human. Combine this with self-driving cars and you have a recipe for disaster: since the AI is worse at detecting non-white people, it is less likely to stop before crushing them in an accident. This is both stupid and evil. You cannot always account for unconscious bias in datasets.
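    The mechanism behind this is easy to demonstrate. Below is a toy sketch (entirely made-up numbers and a deliberately crude "detector", not a real vision model): a model that only recognises inputs resembling its training data achieves near-perfect recall on the over-represented group and almost none on the under-represented one.

    ```python
    # Toy illustration of dataset imbalance hurting the minority group.
    # The "detector" flags an input as human only if its feature value
    # falls inside the range seen during training -- a crude stand-in
    # for a model that generalises poorly outside its data distribution.

    def train_detector(samples):
        """Learn the range of feature values observed in training."""
        return min(samples), max(samples)

    def detect(model, value):
        lo, hi = model
        return lo <= value <= hi

    # Group A is heavily over-represented; group B is barely sampled.
    group_a_train = list(range(0, 100))   # 100 examples covering 0..99
    group_b_train = [200, 201, 202]       # 3 examples from a 200..299 spread

    model = train_detector(group_a_train + group_b_train)

    group_a_test = list(range(0, 100))
    group_b_test = list(range(200, 300))

    recall_a = sum(detect(model, v) for v in group_a_test) / len(group_a_test)
    recall_b = sum(detect(model, v) for v in group_b_test) / len(group_b_test)

    print(f"recall on group A: {recall_a:.2f}")  # detected almost every time
    print(f"recall on group B: {recall_b:.2f}")  # missed almost every time
    ```

    The point of the sketch: nothing in the training code is "racist"; the skew comes entirely from what the dataset contains, which is why imbalance is so easy to ship by accident.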

    “reducing ideological bias, to enable human flourishing and economic competitiveness.”

    They will fill it with capitalist Red Scare propaganda.

    The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.

    Interesting.

    “The AI future is not going to be won by hand-wringing about safety,” Vance told attendees from around the world.

    That was done before. A chatbot named Tay was released into the wilds of Twitter in 2016 without much 'hand-wringing about safety'. It turned into a neo-Nazi, which, I suppose, is just what Edolf Musk wants.

    The researcher who warned that the change in focus could make AI more unfair and unsafe also alleges that many AI researchers have cozied up to Republicans and their backers in an effort to still have a seat at the table when it comes to discussing AI safety. “I hope they start realizing that these people and their corporate backers are face-eating leopards who only care about power,” the researcher says.