trait TsdbBase extends StrictLogging
Core of the time series database processing pipeline.
Linear Supertypes
- StrictLogging
- AnyRef
- Any
Type Members
- abstract type Collection[_]
Type of collection used in this TSDB instance and the DAO. The default implementation uses Iterator as the collection type; the Spark based implementation uses RDD. See the implementation sketch after this list.
- abstract type Result <: TsdbResultBase[Collection]
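As noted in the Collection description above, concrete implementations fix this higher-kinded type member. The sketch below is illustrative only: the trait name IteratorTsdb is an assumption, every other abstract member is elided, and the Spark binding is only indicated in a comment.

```scala
// Illustrative sketch only: `IteratorTsdb` is a hypothetical name and all other
// abstract members of TsdbBase are elided.
trait IteratorTsdb extends TsdbBase {
  // Local, single-node processing: the pipeline works on plain Iterators.
  override type Collection[X] = Iterator[X]

  // A Spark based implementation would instead use something like:
  //   override type Collection[X] = org.apache.spark.rdd.RDD[X]

  // ... remaining abstract members (dao, schema, batch sizes, etc.) omitted
}
```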
Abstract Value Members
- abstract def applyWindowFunctions(queryContext: QueryContext, keysAndValues: Collection[BatchDataset]): Collection[BatchDataset]
- abstract def calculatorFactory: ExpressionCalculatorFactory
- abstract def changelogDao: ChangelogDao
- abstract def createMetricCollector(query: Query, user: YupanaUser): MetricQueryCollector
- abstract def dao: TSDao[Collection, Long]
- abstract def externalLinkServices: Iterable[ExternalLinkService[_]]
- abstract val extractBatchSize: Int
Batch size for reading values from external links
- abstract def finalizeQuery(queryContext: QueryContext, rows: Collection[BatchDataset], metricCollector: MetricQueryCollector): Result
- abstract def linkService(catalog: ExternalLink): ExternalLinkService[_ <: ExternalLink]
- abstract def permissionService: PermissionService
- abstract def prepareQuery: (Query) => Query
- abstract val putBatchSize: Int
Batch size for writing values to external links. See the configuration sketch after this list.
- abstract def registerExternalLink(catalog: ExternalLink, catalogService: ExternalLinkService[_ <: ExternalLink]): Unit
- abstract def schema: Schema
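extractBatchSize and putBatchSize are plain configuration values supplied by a concrete implementation. A minimal sketch, assuming a hypothetical ConfiguredTsdb trait and arbitrary example values (not Yupana defaults):

```scala
// Illustrative only: the numbers are arbitrary examples, not library defaults,
// and `ConfiguredTsdb` is a hypothetical trait with all other members elided.
trait ConfiguredTsdb extends TsdbBase {
  // Rows fetched per batch when reading values from external links.
  override val extractBatchSize: Int = 10000
  // Rows written per batch when writing values to external links.
  override val putBatchSize: Int = 1000
}
```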
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def fillPlaceholders(c: Condition, startTime: Time, params: IndexedSeq[Any]): Condition
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val logger: Logger
- Attributes
- protected
- Definition Classes
- StrictLogging
- def mapReduceEngine(metricCollector: MetricQueryCollector): MapReducible[Collection]
- def mergeCondition(facs: Seq[FlatAndCondition]): Option[Condition]
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def processRows(queryContext: QueryContext, metricCollector: MetricQueryCollector, mr: MapReducible[Collection], rows: Collection[BatchDataset], startTime: Time, params: IndexedSeq[Any]): Result
- def put(dataPoints: Collection[DataPoint], user: YupanaUser = YupanaUser.ANONYMOUS): Unit
- def putBatch(table: Table, batch: BatchDataset, user: YupanaUser): Unit
- def putDataset(table: Table, dataset: Collection[BatchDataset], user: YupanaUser): Unit
- def putDataset(tables: Seq[Table], dataset: Collection[BatchDataset], user: YupanaUser): Unit
- def query(query: Query, startTime: Time = Time(System.currentTimeMillis()), params: IndexedSeq[Any] = IndexedSeq.empty, user: YupanaUser = YupanaUser.ANONYMOUS): Result
Query pipeline. Performs the following stages:
- creates queries for the DAO
- calls the DAO query to get a Collection of rows
- fills the rows with external link values
- extracts KeyData and ValueData
- applies value filters
- applies window functions
- applies aggregation: map, reduce, post-map
- performs post-reduce arithmetic
- extracts field values

The pipeline is not responsible for limiting. This means the collection has to be lazy, to avoid extra calculations when a limit is defined. See the usage sketch after this member list.
- def readExternalLinks(queryContext: QueryContext, ds: BatchDataset): Unit
- def substituteLinks(flatAndConditions: Seq[FlatAndCondition], startTime: Time, user: YupanaUser, metricCollector: MetricQueryCollector): Seq[FlatAndCondition]
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
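A hedged usage sketch of the two documented entry points, put and query. Only the calls themselves come from the signatures listed above; tsdb, parsedQuery and points are stand-ins whose construction is out of scope for this page.

```scala
// Hedged usage sketch: `tsdb`, `parsedQuery` and `points` are assumptions,
// built elsewhere; only the `put` and `query` calls come from this API.
val tsdb: TsdbBase = ???                      // some concrete implementation
val parsedQuery: Query = ???                  // e.g. produced by the SQL layer
val points: tsdb.Collection[DataPoint] = ???  // data to store

// Write: `user` defaults to YupanaUser.ANONYMOUS per the signature above.
tsdb.put(points)

// Read: startTime, params and user all take their documented defaults.
val result: tsdb.Result = tsdb.query(parsedQuery)

// The pipeline is not responsible for limiting, so the underlying collection
// is expected to be lazy; the caller stops consuming once a limit is reached.
```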
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)