Hateful and racist remarks about Indigenous Australians have thrived in the comment sections of politicians' Facebook pages, an inquiry has heard.
In the lead-up to the Indigenous voice referendum, concerns about online hate speech and discrimination have grown.
"As a First Nations person, I can go to the comments section on other parliamentarians' pages and see really, really hateful things about my people," Labor senator Jana Stewart told a parliamentary hearing into digital platforms on Tuesday.
"We've heard about increased threats of physical violence, but (what about) mental health? First Nations people are already twice as likely to commit suicide, and that's even higher for our children who are on your (Meta's) platforms."
Meta's representative Mia Garlick said the company was aware of its role in the lead-up to the referendum.
"We're very mindful of the potential impact from the current public debate for Indigenous communities, and we want to do everything we can," she told the hearing.
Ms Garlick also said it was possible Facebook's tools did not always capture racist comments, but page owners could address issues themselves through moderation tools that allow them to block certain words and hide messages.
Meta has also partnered with mental health organisation ReachOut to provide support for First Nations people ahead of the referendum.
Over the past decade, the social media giant has come under intense scrutiny for allegedly pushing its users down extremist rabbit holes and fomenting hate.
Meta has been accused of fuelling violence and massacres against Myanmar's Rohingya ethnic group in 2017, and driving polarisation during the 2020 US election that led to the insurrection in Washington on January 6, 2021.
Greens senator David Shoebridge said Facebook was not taking responsibility for the role its algorithm played in driving extremism and was "prioritising profits and clickbait over keeping safe".
Ms Garlick said she could not comment on whether the company played a role in the Rohingya massacre because it is "the subject of investigations and litigation", but said Meta had made strides in addressing hate.
The platform has used machine learning and artificial intelligence to proactively detect harmful comments, and its systems now find more than 80 per cent of the hate speech it removes before users report it.
She told the hearing it was in Meta's best interests to make users feel comfortable and prevent discrimination.
"The nature of our service is we don't have a long-term business if people have a negative experience on our platform," Ms Garlick said.
13YARN 13 92 76
Aboriginal Counselling Services 0410 539 905
Kat Wong - AAP