Template_file can't use attributes of another data source created in the same config

My Terraform version (we aren’t yet ready to migrate to 0.12):

$ terraform version
Terraform v0.11.14
+ provider.aws v2.25.0
+ provider.kubernetes v1.9.0
+ provider.template v2.1.2

I’ve written a Terraform module which deploys a logstash DaemonSet to a Kubernetes cluster. I’m trying to use a template_file data source to template the logstash pipeline config file, but it’s failing to access the attributes of an earlier data source that it needs for the template. Here are the relevant parts of the config:

resource "kubernetes_service_account" "logstash" {
  metadata {
    name      = "logstash"
    namespace = "kube-system"
  }
}

data "kubernetes_secret" "logstash_service_account_token" {
  metadata {
    name      = "${kubernetes_service_account.logstash.default_secret_name}"
    namespace = "kube-system"
  }
}

data "template_file" "logstash_config" {
  template = "${file("${path.module}/logstash-pipeline.conf")}"

  vars {
    service_account_token = "${data.kubernetes_secret.logstash_service_account_token.data.token}"
  }
}

When I try to plan this, I get this error:

Error: Error running plan: 1 error occurred:
        * module.logstash.data.template_file.logstash_config: 1 error occurred:
        * module.logstash.data.template_file.logstash_config: Resource 'data.kubernetes_secret.logstash_service_account_token' does not have attribute 'data.token' for variable 'data.kubernetes_secret.logstash_service_account_token.data.token'

I’ve tried to address this by adding an explicit depends_on to the template_file data source, on either or both of "data.kubernetes_secret.logstash_service_account_token" and "kubernetes_service_account.logstash", but that doesn’t fix it.
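For reference, the depends_on variant I tried looks like this (same result either way):

```hcl
data "template_file" "logstash_config" {
  template = "${file("${path.module}/logstash-pipeline.conf")}"

  vars {
    service_account_token = "${data.kubernetes_secret.logstash_service_account_token.data.token}"
  }

  # Tried with either entry alone and with both together.
  depends_on = [
    "kubernetes_service_account.logstash",
    "data.kubernetes_secret.logstash_service_account_token",
  ]
}
```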

According to the Data Sources documentation, this should work:

Query constraint arguments may refer to values that cannot be determined until after configuration is applied, such as the id of a managed resource that has not been created yet. In this case, reading from the data source is deferred until the apply phase, and any references to the results of the data resource elsewhere in configuration will themselves be unknown until after the configuration has been applied.

Is it the fact that this data source is chaining through another data source that’s breaking it?

Is there a way to resolve this?

Note, in case anyone asks: we’re using the kubernetes_metadata logstash plugin, and one of its limitations is that it requires a service account token hard-coded in the logstash pipeline config, which is why I’m trying to do this. A possible workaround that I’ll explore is putting the service account token in an environment variable instead, because logstash config can refer to environment variables.
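A rough sketch of that workaround (the env var name and block placement are illustrative, not tested): the DaemonSet’s container spec would inject the token straight from the service account’s secret, bypassing template_file entirely.

```hcl
# Inside the logstash container spec of the kubernetes_daemon_set resource:
# expose the service account token as an environment variable.
env {
  name = "LOGSTASH_SA_TOKEN"

  value_from {
    secret_key_ref {
      name = "${kubernetes_service_account.logstash.default_secret_name}"
      key  = "token"
    }
  }
}
```

The logstash pipeline config would then reference "${LOGSTASH_SA_TOKEN}" directly, so the token never needs to be known at Terraform plan time.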