bi_etl.lookups.lookup module

Created on Feb 26, 2015

@author: Derek Wood

class bi_etl.lookups.lookup.Lookup(lookup_name: str, lookup_keys: list, parent_component: ETLComponent, config: BI_ETL_Config_Base = None, use_value_cache: bool = True, **kwargs)[source]

Bases: Iterable

COLLECTION_INDEX = datetime.datetime(1900, 1, 1, 0, 0)
DB_LOOKUP_WARNING = 1000
ROW_TYPES

alias of Union[Row, Sequence]

VERSION_COLLECTION_TYPE

alias of OOBTree
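Taken together, these class attributes suggest the cache layout: each lookup-key tuple maps to a version collection (an OOBTree) keyed by datetime, with COLLECTION_INDEX serving as the placeholder key for non-versioned rows. A minimal sketch of that layout, using a plain dict in place of OOBTree and dicts in place of Row objects (the names and layout here are illustrative assumptions, not the library's internals):

```python
from datetime import datetime

# Mirrors Lookup.COLLECTION_INDEX: the placeholder version key used
# when a lookup is not date-versioned.
COLLECTION_INDEX = datetime(1900, 1, 1, 0, 0)

# Cache layout: lookup-key tuple -> {effective datetime -> row}
# (a plain dict stands in for the OOBTree version collection)
cache = {}

def cache_version(key_tuple, effective_date, row):
    cache.setdefault(key_tuple, {})[effective_date] = row

cache_version(('C-1',), COLLECTION_INDEX, {'customer_id': 'C-1', 'name': 'Acme'})
cache_version(('C-2',), datetime(2020, 1, 1), {'customer_id': 'C-2', 'name': 'Beta v1'})
cache_version(('C-2',), datetime(2021, 6, 1), {'customer_id': 'C-2', 'name': 'Beta v2'})

# Two versions stored under the same lookup key
print(len(cache[('C-2',)]))  # 2
```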

__init__(lookup_name: str, lookup_keys: list, parent_component: ETLComponent, config: BI_ETL_Config_Base = None, use_value_cache: bool = True, **kwargs)[source]
add_size_to_stats() None[source]
cache_row(row: Row, allow_update: bool = True, allow_insert: bool = True)[source]

Adds the given row to the cache for this lookup.

Parameters:
  • row (Row) – The row to cache

  • allow_update (boolean) – Allow this method to update an existing row in the cache.

  • allow_insert (boolean) – Allow this method to insert a new row into the cache

Raises:

ValueError – If allow_update is False and a row with an already-cached lookup key is passed in.
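The allow_update / allow_insert contract can be sketched with a stand-in dict cache. Only the allow_update ValueError is documented above; the allow_insert skip behavior shown here is an assumption:

```python
def cache_row(cache, key_tuple, row, allow_update=True, allow_insert=True):
    """Stand-in for Lookup.cache_row: store row under key_tuple per the documented flags."""
    if key_tuple in cache:
        if not allow_update:
            # Documented behavior: refuse to overwrite an existing lookup key
            raise ValueError(f'Key {key_tuple} already cached and allow_update is False')
        cache[key_tuple] = row
    elif allow_insert:
        cache[key_tuple] = row
    # else: assumed behavior - new keys are silently not inserted

cache = {}
cache_row(cache, (1,), {'id': 1, 'name': 'first'})
cache_row(cache, (1,), {'id': 1, 'name': 'updated'})       # update allowed by default
cache_row(cache, (2,), {'id': 2}, allow_insert=False)      # assumed: not inserted
try:
    cache_row(cache, (1,), {'id': 1}, allow_update=False)  # raises ValueError
except ValueError:
    pass
```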

cache_set(lk_tuple: tuple, version_collection: OOBTree[datetime, Row], allow_update: bool = True)[source]

Adds the given set of rows to the cache for this lookup.

Parameters:
  • lk_tuple – The key tuple to store the rows under

  • version_collection – The set of rows to cache

  • allow_update (boolean) – Allow this method to update an existing row in the cache.

Raises:

ValueError – If allow_update is False and a key tuple that is already cached is passed in.

check_estimate_row_size(force_now=False)[source]
clear_cache() None[source]

Removes cache and resets to un-cached state

commit()[source]

Placeholder for other implementations that might need it

estimated_row_size()[source]
find(row: ROW_TYPES, fallback_to_db: bool = True, maintain_cache: bool = True, stats: Statistics = None, **kwargs) Row[source]
find_in_cache(row: ROW_TYPES, **kwargs) Row[source]

Find a matching row in the lookup based on the lookup index (keys)
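Cache probing by the lookup keys amounts to building a hashable key tuple from the row's key columns (compare get_hashable_combined_key and get_list_of_lookup_column_values below) and indexing the cache with it. An illustrative sketch using plain dicts in place of Row objects, with example key names:

```python
lookup_keys = ['customer_id', 'source_system']  # example keys, as passed to __init__

def get_hashable_combined_key(row):
    # Pull the lookup key values out of the row, in lookup_keys order
    return tuple(row[key] for key in lookup_keys)

cache = {}
row = {'customer_id': 42, 'source_system': 'CRM', 'name': 'Acme'}
cache[get_hashable_combined_key(row)] = row

# A probe row only needs the key columns populated
probe = {'customer_id': 42, 'source_system': 'CRM'}
found = cache.get(get_hashable_combined_key(probe))
print(found['name'])  # Acme
```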

find_in_remote_table(row: ROW_TYPES, **kwargs) Row[source]

Find a matching row in the lookup based on the lookup index (keys)

Only works if parent_component is based on bi_etl.components.readonlytable

Parameters:

row – The row containing the key values to search for

Return type:

A row

find_matches_in_cache(row: ROW_TYPES, **kwargs) Sequence[Row][source]
find_versions_list(row: ROW_TYPES, fallback_to_db: bool = True, maintain_cache: bool = True, stats: Statistics = None) list[source]
Parameters:
  • row – The row or key tuple to find

  • fallback_to_db – Use the database to search if the row is not found in the cached copy

  • maintain_cache – Add rows found in the database to the cached copy?

  • stats – Statistics to maintain

Return type:

A list of rows

find_versions_list_in_remote_table(row: ROW_TYPES) list[source]
find_where(key_names: Sequence, key_values_dict: Mapping, limit: int = None)[source]

Scan all cached rows (expensive) to find the list of rows that match the given criteria.
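The full-scan matching that find_where (and uncache_where below) describes can be sketched as a linear filter over the cached rows. This is illustrative only, with dicts standing in for Row objects:

```python
def find_where(cached_rows, key_names, key_values_dict, limit=None):
    """Scan all cached rows (expensive) for rows whose key_names columns
    all equal the corresponding values in key_values_dict."""
    matches = []
    for row in cached_rows:
        if all(row.get(name) == key_values_dict[name] for name in key_names):
            matches.append(row)
            if limit is not None and len(matches) >= limit:
                break
    return matches

rows = [
    {'id': 1, 'region': 'EU', 'status': 'active'},
    {'id': 2, 'region': 'US', 'status': 'active'},
    {'id': 3, 'region': 'EU', 'status': 'closed'},
]
print([r['id'] for r in find_where(rows, ['region'], {'region': 'EU'})])  # [1, 3]
```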

get_disk_size() int[source]
get_hashable_combined_key(row: ROW_TYPES) Sequence[source]
get_list_of_lookup_column_values(row: ROW_TYPES) list[source]
get_memory_size() int[source]
get_versions_collection(row: ROW_TYPES) MutableMapping[datetime, Row][source]

This method exists for compatibility with range caches

Parameters:

row – The row containing the key values to search for

Return type:

A MutableMapping of rows

has_done_get_estimate_row_size()[source]
has_row(row: ROW_TYPES) bool[source]

Does the row exist in the cache (for any date if it’s a date range cache)

Parameters:

row – The row (or key-value sequence) to check for

init_cache() None[source]

Initializes the cache as empty.

property lookup_keys_set
report_on_value_cache_effectiveness(lookup_name: str = None)[source]
row_iteration_header_has_lookup_keys(row_iteration_header: RowIterationHeader) bool[source]
static rstrip_key_value(val: object) object[source]

Since most, if not all, DBs consider two strings that only differ in trailing blanks to be equal, we need to rstrip any string values so that the lookup does the same.

Parameters:

val – The key value to normalize

Returns:

The value with trailing whitespace removed if it is a string; otherwise the value unchanged
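Given the description above, a plausible implementation (an assumption for illustration, not the library source) strips trailing whitespace from strings and passes every other type through:

```python
def rstrip_key_value(val):
    # Strings: drop trailing whitespace so 'A  ' and 'A' compare equal,
    # matching typical database CHAR/VARCHAR comparison semantics.
    if isinstance(val, str):
        return val.rstrip()
    # Non-strings (ints, dates, None, ...) pass through unchanged
    return val

print(rstrip_key_value('ABC   '))  # ABC
print(rstrip_key_value(42))        # 42
```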

uncache_row(row: ROW_TYPES)[source]
uncache_set(row: ROW_TYPES)[source]
uncache_where(key_names: Sequence, key_values_dict: Mapping)[source]

Scan all cached rows (expensive) to find rows to remove.