When we consider the power imbalance between those who build ML systems and those most vulnerable to the impacts of those systems, what often enables that power is the concentration of control in the hands of tech companies and technical experts, who consolidate power through claims to scientific objectivity and through the legal protections of intellectual property. At the same time, there is a legacy in the scientific community of data being wielded as an instrument of oppression, reinforcing inequality and perpetuating injustice. At Data for Black Lives, we bring together scientists and community-based activists to take collective action, using data to fight bias, build progressive movements, and promote civic engagement.

In the ML community, the initial steps in building ML systems, namely data collection, storage, and access, are often taken for granted. Researchers frequently engage with datasets as if they appeared spontaneously, with no social context. One way to move beyond fairness metrics and generic discussions of ethics, and to meaningfully shift agency to the people most marginalized, is to stop ignoring the context, construction, and implications of the datasets we use for research. I offer two considerations for shifting power in this way: intentional data narratives, and data trusts, an alternative to current strategies of data governance.