Next presentation: https://slideslive.com/38921521

Developing machine learning systems that address social biases and potential misuse requires creative approaches. Recently, we have seen the release of models that encode substantial social biases, as well as models ripe for misuse in generating neural disinformation. In this lecture, I examine how creative technologists, researchers in the digital humanities, and machine learning researchers can work together to develop more robust defenses. I discuss how a deep understanding of historical context can inform our view of the current socio-political impacts of neural disinformation and of bias in machine learning models. I then highlight research that operationalizes these models, and discuss how we might foster critical interactions between research fields to combat bias and misuse moving forward.