config_wrangler.config_templates.credentials module

pydantic model config_wrangler.config_templates.credentials.Credentials[source]

Bases: ConfigHierarchy

Config:
  • validate_default: bool = True

  • validate_assignment: bool = True

  • validate_credentials: bool = True

Fields:
  • keepass (KeepassConfig | None)
  • keepass_config (str | None)
  • keepass_group (str | None)
  • keepass_title (str | None)
  • keyring_section (str | None)
  • password_source (PasswordSource | None)
  • raw_password (str | None)
  • user_id (str | None)
  • validate_password_on_load (bool)

Validators:
  • check_model » all fields
field keepass: KeepassConfig | None = None

If the password_source is KEEPASS, then load a sub-section with the config_wrangler.config_templates.keepass_config.KeepassConfig settings.

Validated by: check_model
field keepass_config: str | None = 'keepass'

If the password_source is KEEPASS, then this names which root-level config item contains the settings for Keepass (it must be an instance of config_wrangler.config_templates.keepass_config.KeepassConfig).

Validated by: check_model
field keepass_group: str | None = None

If the password_source is KEEPASS, the group in the Keepass database that should be searched for a matching entry.

If it is None, then the KeepassConfig.default_group value will be checked. If that is also None, then a ValueError will be raised.

Validated by: check_model
field keepass_title: str | None = None

If the password_source is KEEPASS, this is an optional filter on the title of the Keepass entries in the group.

Validated by: check_model
field keyring_section: str | None = None

If the password_source is KEYRING, the section (AKA system) in which this module should look for the password.

See https://pypi.org/project/keyring/ or https://github.com/jaraco/keyring

Validated by: check_model
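
For the KEYRING source the password must already exist in the operating-system keyring. A minimal sketch using the keyring package directly, assuming keyring_section maps to keyring's service/system name and user_id to the username (the names below are illustrative):

    import keyring

    # One-time setup, often done from a shell or deployment script rather than app code.
    keyring.set_password('my_app', 'etl_user', 's3cret-value')

    # This is the entry config_wrangler would need to find for
    # keyring_section='my_app', user_id='etl_user':
    print(keyring.get_password('my_app', 'etl_user'))
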
field password_source: PasswordSource | None = None

The source to use when getting a password for the user. See PasswordSource for valid values.

Validated by: check_model
field raw_password: str | None = None

This is only used for the extremely non-secure CONFIG_FILE password source. The password is stored directly in the config file next to the user_id, under the setting name raw_password.

Validated by: check_model
field user_id: str | None = None

The user ID to use

Validated by: check_model
field validate_password_on_load: bool = True

Should config_wrangler query the password source for this password at load (startup) time? If so, it will raise an error if the password is None or an empty string. It does not actually connect or authenticate the user_id & password combination.

Validated by: check_model
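
A minimal sketch of building a Credentials object directly in Python, assuming the classes are importable from this module as documented; the values are illustrative only and in normal use they come from the parsed config file:

    from config_wrangler.config_templates.credentials import Credentials, PasswordSource

    cred = Credentials(
        user_id='etl_user',
        password_source=PasswordSource.KEYRING,
        keyring_section='my_app',           # hypothetical keyring service name
        validate_password_on_load=False,    # skip the startup password lookup for this sketch
    )
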
__init__(**data: Any) None

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Uses something other than self for the first arg to allow “self” as a settable attribute

add_child(name: str, child_object: ConfigHierarchy)

Set this configuration as a child in the hierarchy of another config. For any programmatically created config objects this is required so that the new object ‘knows’ where it lives in the hierarchy, most importantly so that it can find the hierarchy’s root object.
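
A hedged sketch of attaching a programmatically created credentials object to an already-loaded config. parent_config is an assumed, existing ConfigHierarchy instance, and the sketch reads the add_child(name, child_object) signature as a call made on the parent:

    from config_wrangler.config_templates.credentials import Credentials, PasswordSource

    # parent_config is assumed to be an already-loaded ConfigHierarchy (e.g. the root config).
    extra_cred = Credentials(
        user_id='reporting_user',
        password_source=PasswordSource.KEYRING,
        keyring_section='my_app',           # hypothetical keyring service name
        validate_password_on_load=False,    # avoid a keyring lookup in this sketch
    )
    parent_config.add_child('reporting_credentials', extra_cred)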

validator check_model  »  all fields[source]
classmethod construct(_fields_set: set[str] | None = None, **values: Any) Model
copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: Dict[str, Any] | None = None, deep: bool = False) Model

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

    data = self.model_dump(include=include, exclude=exclude, round_trip=True)
    data = {**data, **(update or {})}
    copied = self.model_validate(data)

Parameters:
  • include – Optional set or mapping specifying which fields to include in the copied model.

  • exclude – Optional set or mapping specifying which fields to exclude in the copied model.

  • update – Optional dictionary of field-value pairs to override field values in the copied model.

  • deep – If True, the values of fields that are Pydantic models will be deep-copied.

Returns:

A copy of the model with included, excluded and updated fields as specified.

dict(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) Dict[str, Any]
classmethod from_orm(obj: Any) Model
full_item_name(item_name: str = None, delimiter: str = ' -> ')

The fully qualified name of this config item in the config hierarchy.

get(section, item, fallback=Ellipsis)

Used as a drop-in replacement for ConfigParser.get() with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access)

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.

get_copy(copied_by: str = 'get_copy') ConfigHierarchy

Copy this configuration. Useful when you need to programmatically modify a configuration without modifying the original base configuration.
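
A sketch of copying and tweaking a credentials section without touching the original; all names and values are illustrative:

    from config_wrangler.config_templates.credentials import Credentials, PasswordSource

    original = Credentials(
        user_id='etl_user',
        password_source=PasswordSource.KEYRING,
        keyring_section='my_app',
        validate_password_on_load=False,
    )
    modified = original.get_copy()
    modified.user_id = 'readonly_user'   # validate_assignment=True re-validates this assignment
    print(original.user_id, modified.user_id)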

get_list(section, item, fallback=Ellipsis) list

Used as a drop-in replacement for ConfigParser.get() plus list parsing, with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access); the value found is then parsed as a list.

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.

get_password() str[source]

Get the password for this resource. password_source controls where it looks for the password. If that is None, then the root-level passwords container is checked for a password_source value.
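
A sketch of the least secure source, CONFIG_FILE, where the password sits in the config itself; based on the raw_password field description above, get_password is expected to hand back raw_password in that case (an assumption, not a guarantee):

    from config_wrangler.config_templates.credentials import Credentials, PasswordSource

    cred = Credentials(
        user_id='demo_user',
        password_source=PasswordSource.CONFIG_FILE,
        raw_password='not-a-real-secret',
    )
    print(cred.get_password())   # expected to print 'not-a-real-secret'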

getboolean(section, item, fallback=Ellipsis) bool

Used as a drop-in replacement for ConfigParser.getboolean() with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access)

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.
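
A hedged sketch of the ConfigParser-style dynamic lookups (get, get_list, getboolean). root_config is an assumed, already-loaded ConfigHierarchy, and the section and item names are hypothetical:

    # Section chosen at runtime, e.g. from a command-line argument.
    section = 'target_database'
    user = root_config.get(section, 'user_id', fallback=None)
    keep_alive = root_config.getboolean(section, 'keep_alive', fallback=False)
    schemas = root_config.get_list(section, 'schema_list', fallback=[])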

json(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = PydanticUndefined, models_as_dict: bool = PydanticUndefined, **dumps_kwargs: Any) str
classmethod model_construct(_fields_set: set[str] | None = None, **values: Any) Model

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values.

Parameters:
  • _fields_set – The set of field names accepted for the Model instance.

  • values – Trusted or pre-validated data dictionary.

Returns:

A new instance of the Model class with validated data.

model_copy(*, update: dict[str, Any] | None = None, deep: bool = False) Model

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

Returns a copy of the model.

Parameters:
  • update – Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

  • deep – Set to True to make a deep copy of the model.

Returns:

New model instance.

model_dump(*, mode: Literal['json', 'python'] | str = 'python', include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) dict[str, Any]

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:
  • mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.

  • include – A list of fields to include in the output.

  • exclude – A list of fields to exclude from the output.

  • by_alias – Whether to use the field’s alias in the dictionary key if defined.

  • exclude_unset – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults – Whether to exclude fields that are set to their default value.

  • exclude_none – Whether to exclude fields that have a value of None.

  • round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings – Whether to log warnings when invalid fields are encountered.

Returns:

A dictionary representation of the model.
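
A sketch of dumping a credentials section for logging while leaving out the secret field; the class also provides model_dump_non_private (below), which may be the better fit for this purpose:

    from config_wrangler.config_templates.credentials import Credentials, PasswordSource

    cred = Credentials(
        user_id='etl_user',
        password_source=PasswordSource.CONFIG_FILE,
        raw_password='not-a-real-secret',
    )
    print(cred.model_dump(exclude={'raw_password'}, exclude_none=True))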

model_dump_json(*, indent: int | None = None, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) str

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

Generates a JSON representation of the model using Pydantic’s to_json method.

Parameters:
  • indent – Indentation to use in the JSON output. If None is passed, the output will be compact.

  • include – Field(s) to include in the JSON output.

  • exclude – Field(s) to exclude from the JSON output.

  • by_alias – Whether to serialize using field aliases.

  • exclude_unset – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults – Whether to exclude fields that are set to their default value.

  • exclude_none – Whether to exclude fields that have a value of None.

  • round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings – Whether to log warnings when invalid fields are encountered.

Returns:

A JSON string representation of the model.

model_dump_non_private(*, mode: Literal['json', 'python'] | str = 'python', exclude: Set[str] = None) dict[str, Any]
classmethod model_json_schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}', schema_generator: type[~pydantic.json_schema.GenerateJsonSchema] = <class 'pydantic.json_schema.GenerateJsonSchema'>, mode: ~typing.Literal['validation', 'serialization'] = 'validation') dict[str, Any]

Generates a JSON schema for a model class.

Parameters:
  • by_alias – Whether to use attribute aliases or not.

  • ref_template – The reference template.

  • schema_generator – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications

  • mode – The mode in which to generate the schema.

Returns:

The JSON schema for the given model class.

classmethod model_parametrized_name(params: tuple[type[Any], ...]) str

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

params – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

String representing the new class where params are passed to cls as type variables.

Raises:

TypeError – Raised when trying to generate concrete names for non-generic models.

model_post_init(_ModelMetaclass__context: Any) None

We need to both initialize private attributes and call the user-defined model_post_init method.

classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: dict[str, Any] | None = None) bool | None

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:
  • force – Whether to force the rebuilding of the model schema, defaults to False.

  • raise_errors – Whether to raise errors, defaults to True.

  • _parent_namespace_depth – The depth level of the parent namespace, defaults to 2.

  • _types_namespace – The types namespace, defaults to None.

Returns:

Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.

classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: dict[str, Any] | None = None) Model

Validate a pydantic model instance.

Parameters:
  • obj – The object to validate.

  • strict – Whether to enforce types strictly.

  • from_attributes – Whether to extract data from object attributes.

  • context – Additional context to pass to the validator.

Raises:

ValidationError – If the object could not be validated.

Returns:

The validated model instance.

classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: dict[str, Any] | None = None) Model

Usage docs: https://docs.pydantic.dev/2.6/concepts/json/#json-parsing

Validate the given JSON data against the Pydantic model.

Parameters:
  • json_data – The JSON data to validate.

  • strict – Whether to enforce types strictly.

  • context – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Raises:

ValueError – If json_data is not a JSON string.

classmethod model_validate_strings(obj: Any, *, strict: bool | None = None, context: dict[str, Any] | None = None) Model

Validate the given object, which contains string data, against the Pydantic model.

Parameters:
  • obj – The object containing string data to validate.

  • strict – Whether to enforce types strictly.

  • context – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

classmethod parse_file(path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}') Dict[str, Any]
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/$defs/{model}', **dumps_kwargs: Any) str
set_as_child(name: str, other_config_item: ConfigHierarchy)
static translate_config_data(config_data: MutableMapping)

Subclasses can provide translation logic to allow older config files to be used with newer config class definitions.
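
A hedged sketch of such a translation hook: a subclass that maps a hypothetical old key name onto the current field name. Whether the hook is expected to mutate the mapping in place, return it, or both is not spelled out here, so the sketch does both:

    from collections.abc import MutableMapping

    from config_wrangler.config_templates.credentials import Credentials

    class MyCredentials(Credentials):
        @staticmethod
        def translate_config_data(config_data: MutableMapping):
            # Hypothetical rename: older config files used 'username' instead of 'user_id'.
            if 'username' in config_data and 'user_id' not in config_data:
                config_data['user_id'] = config_data.pop('username')
            return config_data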

classmethod update_forward_refs(**localns: Any) None
classmethod validate(value: Any) Model
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

property model_extra: dict[str, Any] | None

Get extra fields set during validation.

Returns:

A dictionary of extra fields, or None if config.extra is not set to “allow”.

property model_fields_set: set[str]

Returns the set of fields that have been explicitly set on this model instance.

Returns:

A set of strings representing the fields that have been set,

i.e. that were not filled from defaults.

pydantic model config_wrangler.config_templates.credentials.PasswordDefaults[source]

Bases: ConfigHierarchy

Config:
  • validate_default: bool = True

  • validate_assignment: bool = True

  • validate_credentials: bool = True

Fields:
  • keepass (KeepassConfig | None)
  • keepass_config (str)
  • password_source (PasswordSource | None)
field keepass: KeepassConfig | None = None

If the password_source is KEEPASS, then load a sub-section with the config_wrangler.config_templates.keepass_config.KeepassConfig settings.

field keepass_config: str = 'keepass'

If the password_source is KEEPASS, then this names which root-level config item contains the settings for Keepass (it must be an instance of config_wrangler.config_templates.keepass_config.KeepassConfig).

field password_source: PasswordSource | None = None
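
A sketch of a root-level defaults object so that individual Credentials sections can omit password_source and fall back to it (see get_password above); the import path follows this module's documentation:

    from config_wrangler.config_templates.credentials import PasswordDefaults, PasswordSource

    defaults = PasswordDefaults(password_source=PasswordSource.KEYRING)
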
__init__(**data: Any) None

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Uses something other than self for the first arg to allow “self” as a settable attribute

add_child(name: str, child_object: ConfigHierarchy)

Set this configuration as a child in the hierarchy of another config. For any programmatically created config objects this is required so that the new object ‘knows’ where it lives in the hierarchy, most importantly so that it can find the hierarchy’s root object.

classmethod construct(_fields_set: set[str] | None = None, **values: Any) Model
copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: Dict[str, Any] | None = None, deep: bool = False) Model

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

    data = self.model_dump(include=include, exclude=exclude, round_trip=True)
    data = {**data, **(update or {})}
    copied = self.model_validate(data)

Parameters:
  • include – Optional set or mapping specifying which fields to include in the copied model.

  • exclude – Optional set or mapping specifying which fields to exclude in the copied model.

  • update – Optional dictionary of field-value pairs to override field values in the copied model.

  • deep – If True, the values of fields that are Pydantic models will be deep-copied.

Returns:

A copy of the model with included, excluded and updated fields as specified.

dict(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) Dict[str, Any]
classmethod from_orm(obj: Any) Model
full_item_name(item_name: str = None, delimiter: str = ' -> ')

The fully qualified name of this config item in the config hierarchy.

get(section, item, fallback=Ellipsis)

Used as a drop-in replacement for ConfigParser.get() with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access)

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.

get_copy(copied_by: str = 'get_copy') ConfigHierarchy

Copy this configuration. Useful when you need to programmatically modify a configuration without modifying the original base configuration.

get_list(section, item, fallback=Ellipsis) list

Used as a drop-in replacement for ConfigParser.get() plus list parsing, with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access); the value found is then parsed as a list.

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.

getboolean(section, item, fallback=Ellipsis) bool

Used as a drop-in replacement for ConfigParser.getboolean() with dynamic config field names (using a string variable for the section and item names instead of Python code attribute access)

Warning

With this method Python code checkers (linters) will not warn about invalid config items. You can end up with runtime AttributeError errors.

json(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = PydanticUndefined, models_as_dict: bool = PydanticUndefined, **dumps_kwargs: Any) str
classmethod model_construct(_fields_set: set[str] | None = None, **values: Any) Model

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values.

Parameters:
  • _fields_set – The set of field names accepted for the Model instance.

  • values – Trusted or pre-validated data dictionary.

Returns:

A new instance of the Model class with validated data.

model_copy(*, update: dict[str, Any] | None = None, deep: bool = False) Model

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

Returns a copy of the model.

Parameters:
  • update – Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

  • deep – Set to True to make a deep copy of the model.

Returns:

New model instance.

model_dump(*, mode: Literal['json', 'python'] | str = 'python', include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) dict[str, Any]

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:
  • mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.

  • include – A list of fields to include in the output.

  • exclude – A list of fields to exclude from the output.

  • by_alias – Whether to use the field’s alias in the dictionary key if defined.

  • exclude_unset – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults – Whether to exclude fields that are set to their default value.

  • exclude_none – Whether to exclude fields that have a value of None.

  • round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings – Whether to log warnings when invalid fields are encountered.

Returns:

A dictionary representation of the model.

model_dump_json(*, indent: int | None = None, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) str

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

Generates a JSON representation of the model using Pydantic’s to_json method.

Parameters:
  • indent – Indentation to use in the JSON output. If None is passed, the output will be compact.

  • include – Field(s) to include in the JSON output.

  • exclude – Field(s) to exclude from the JSON output.

  • by_alias – Whether to serialize using field aliases.

  • exclude_unset – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults – Whether to exclude fields that are set to their default value.

  • exclude_none – Whether to exclude fields that have a value of None.

  • round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings – Whether to log warnings when invalid fields are encountered.

Returns:

A JSON string representation of the model.

model_dump_non_private(*, mode: Literal['json', 'python'] | str = 'python', exclude: Set[str] = None) dict[str, Any]
classmethod model_json_schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}', schema_generator: type[~pydantic.json_schema.GenerateJsonSchema] = <class 'pydantic.json_schema.GenerateJsonSchema'>, mode: ~typing.Literal['validation', 'serialization'] = 'validation') dict[str, Any]

Generates a JSON schema for a model class.

Parameters:
  • by_alias – Whether to use attribute aliases or not.

  • ref_template – The reference template.

  • schema_generator – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications

  • mode – The mode in which to generate the schema.

Returns:

The JSON schema for the given model class.

classmethod model_parametrized_name(params: tuple[type[Any], ...]) str

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

params – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

String representing the new class where params are passed to cls as type variables.

Raises:

TypeError – Raised when trying to generate concrete names for non-generic models.

model_post_init(__context: Any) None

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that’s what pydantic-core passes when calling it.

Parameters:
  • self – The BaseModel instance.

  • __context – The context.

classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: dict[str, Any] | None = None) bool | None

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:
  • force – Whether to force the rebuilding of the model schema, defaults to False.

  • raise_errors – Whether to raise errors, defaults to True.

  • _parent_namespace_depth – The depth level of the parent namespace, defaults to 2.

  • _types_namespace – The types namespace, defaults to None.

Returns:

Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.

classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: dict[str, Any] | None = None) Model

Validate a pydantic model instance.

Parameters:
  • obj – The object to validate.

  • strict – Whether to enforce types strictly.

  • from_attributes – Whether to extract data from object attributes.

  • context – Additional context to pass to the validator.

Raises:

ValidationError – If the object could not be validated.

Returns:

The validated model instance.

classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: dict[str, Any] | None = None) Model

Usage docs: https://docs.pydantic.dev/2.6/concepts/json/#json-parsing

Validate the given JSON data against the Pydantic model.

Parameters:
  • json_data – The JSON data to validate.

  • strict – Whether to enforce types strictly.

  • context – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Raises:

ValueError – If json_data is not a JSON string.

classmethod model_validate_strings(obj: Any, *, strict: bool | None = None, context: dict[str, Any] | None = None) Model

Validate the given object, which contains string data, against the Pydantic model.

Parameters:
  • obj – The object containing string data to validate.

  • strict – Whether to enforce types strictly.

  • context – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

classmethod parse_file(path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}') Dict[str, Any]
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/$defs/{model}', **dumps_kwargs: Any) str
set_as_child(name: str, other_config_item: ConfigHierarchy)
static translate_config_data(config_data: MutableMapping)

Subclasses can provide translation logic to allow older config files to be used with newer config class definitions.

classmethod update_forward_refs(**localns: Any) None
classmethod validate(value: Any) Model
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

property model_extra: dict[str, Any] | None

Get extra fields set during validation.

Returns:

A dictionary of extra fields, or None if config.extra is not set to “allow”.

property model_fields_set: set[str]

Returns the set of fields that have been explicitly set on this model instance.

Returns:

A set of strings representing the fields that have been set,

i.e. that were not filled from defaults.

class config_wrangler.config_templates.credentials.PasswordSource(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: StrEnum

CONFIG_FILE = 'CONFIG_FILE'
ENVIRONMENT = 'ENVIRONMENT'
KEEPASS = 'KEEPASS'
KEYRING = 'KEYRING'
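
PasswordSource is a StrEnum, so members parse from and compare equal to their string values, which is how the config-file text maps onto the enum:

    from config_wrangler.config_templates.credentials import PasswordSource

    source = PasswordSource('KEYRING')        # parse the raw config string
    print(source == 'KEYRING')                # True: StrEnum members equal their values
    print(source is PasswordSource.KEYRING)   # True
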
__init__(*args, **kwds)
capitalize()

Return a capitalized version of the string.

More specifically, make the first character have upper case and the rest lower case.

casefold()

Return a version of the string suitable for caseless comparisons.

center(width, fillchar=' ', /)

Return a centered string of length width.

Padding is done using the specified fill character (default is a space).

count(sub[, start[, end]]) int

Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.

encode(encoding='utf-8', errors='strict')

Encode the string using the codec registered for encoding.

encoding

The encoding in which to encode the string.

errors

The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.

endswith(suffix[, start[, end]]) bool

Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.

expandtabs(tabsize=8)

Return a copy where all tab characters are expanded using spaces.

If tabsize is not given, a tab size of 8 characters is assumed.

find(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

format(*args, **kwargs) str

Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).

format_map(mapping) str

Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).

index(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

isalnum()

Return True if the string is an alpha-numeric string, False otherwise.

A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.

isalpha()

Return True if the string is an alphabetic string, False otherwise.

A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.

isascii()

Return True if all characters in the string are ASCII, False otherwise.

ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.

isdecimal()

Return True if the string is a decimal string, False otherwise.

A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.

isdigit()

Return True if the string is a digit string, False otherwise.

A string is a digit string if all characters in the string are digits and there is at least one character in the string.

isidentifier()

Return True if the string is a valid Python identifier, False otherwise.

Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.

islower()

Return True if the string is a lowercase string, False otherwise.

A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.

isnumeric()

Return True if the string is a numeric string, False otherwise.

A string is numeric if all characters in the string are numeric and there is at least one character in the string.

isprintable()

Return True if the string is printable, False otherwise.

A string is printable if all of its characters are considered printable in repr() or if it is empty.

isspace()

Return True if the string is a whitespace string, False otherwise.

A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.

istitle()

Return True if the string is a title-cased string, False otherwise.

In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.

isupper()

Return True if the string is an uppercase string, False otherwise.

A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.

join(iterable, /)

Concatenate any number of strings.

The string whose method is called is inserted in between each given string. The result is returned as a new string.

Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'

ljust(width, fillchar=' ', /)

Return a left-justified string of length width.

Padding is done using the specified fill character (default is a space).

lower()

Return a copy of the string converted to lowercase.

lstrip(chars=None, /)

Return a copy of the string with leading whitespace removed.

If chars is given and not None, remove characters in chars instead.

static maketrans()

Return a translation table usable for str.translate().

If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.

partition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing the original string and two empty strings.

removeprefix(prefix, /)

Return a str with the given prefix string removed if present.

If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.

removesuffix(suffix, /)

Return a str with the given suffix string removed if present.

If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.

replace(old, new, count=-1, /)

Return a copy with all occurrences of substring old replaced by new.

count

Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.

If the optional argument count is given, only the first count occurrences are replaced.

rfind(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

rindex(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

rjust(width, fillchar=' ', /)

Return a right-justified string of length width.

Padding is done using the specified fill character (default is a space).

rpartition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing two empty strings and the original string.

rsplit(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Splitting starts at the end of the string and works to the front.

rstrip(chars=None, /)

Return a copy of the string with trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

split(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.

splitlines(keepends=False)

Return a list of the lines in the string, breaking at line boundaries.

Line breaks are not included in the resulting list unless keepends is given and true.

startswith(prefix[, start[, end]]) bool

Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.

strip(chars=None, /)

Return a copy of the string with leading and trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

swapcase()

Convert uppercase characters to lowercase and lowercase characters to uppercase.

title()

Return a version of the string where each word is titlecased.

More specifically, words start with uppercased characters and all remaining cased characters have lower case.

translate(table, /)

Replace each character in the string using the given translation table.

table

Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.

The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.

upper()

Return a copy of the string converted to uppercase.

zfill(width, /)

Pad a numeric string with zeros on the left, to fill a field of the given width.

The string is never truncated.