Ontology (the theory of what really is, of what is there ‘objectively’) is what makes conflict possible: without further auxiliary hypotheses, there is no incompatibility between “Alice thinks there is a saber-toothed tiger coming” and “Bob thinks there is no such tiger coming”. We can think of the social function of the whole concept of objectivity as serving this purpose.
Clearly it’s useful in some situations, especially in ‘ordinary empirical descriptive’ (OED) language (‘the frog is on the log’). So we would lose the ability to cope with the world in important ways if we adopted a thoroughgoing idealism.
However, outside the ‘home language-game’ of OED vocabulary, do we really need it? Could it be causing unnecessary conflict when pulled outside its original motivating context, as in the tooth pain example?
For whatever sociohistorical reasons (which would be interesting to think about), objectivity-talk is incredibly pervasive. It takes serious work to unlearn it, and to show that one can coherently grapple with non-OED concepts without it. One example is detailed here: we’re primed to think of sentience as some objective phenomenon (because we’re primed to be descriptivists), but this is entirely unnecessary. We can completely sidestep talk of the ontological nature of sentience and get along perfectly fine. In fact, given a real-world problem that depends on this issue (e.g. “Is this AI a sentient being?”), sidestepping it is a net positive, because we can focus our attention on what is relevant (our social practices relating to the AI, rather than some feature purely of the source code).
However, if you want power over others, couching your beliefs in objectivity-talk is a useful way of gaining authority.